00:00:00.000 Started by upstream project "autotest-nightly" build number 4157 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3519 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.074 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-cvl-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.075 The recommended git tool is: git 00:00:00.075 using credential 00000000-0000-0000-0000-000000000002 00:00:00.077 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-cvl-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.138 Fetching changes from the remote Git repository 00:00:00.140 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.210 Using shallow fetch with depth 1 00:00:00.210 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.210 > git --version # timeout=10 00:00:00.275 > git --version # 'git version 2.39.2' 00:00:00.275 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.308 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.308 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.321 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.335 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.347 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:06.347 > git config core.sparsecheckout # timeout=10 00:00:06.359 > git read-tree -mu HEAD # timeout=10 00:00:06.375 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:06.393 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:06.393 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:06.509 [Pipeline] Start of Pipeline 00:00:06.522 [Pipeline] library 00:00:06.523 Loading library shm_lib@master 00:00:06.524 Library shm_lib@master is cached. Copying from home. 00:00:06.542 [Pipeline] node 00:00:06.551 Running on WFP38 in /var/jenkins/workspace/nvmf-cvl-phy-autotest 00:00:06.552 [Pipeline] { 00:00:06.563 [Pipeline] catchError 00:00:06.565 [Pipeline] { 00:00:06.579 [Pipeline] wrap 00:00:06.587 [Pipeline] { 00:00:06.595 [Pipeline] stage 00:00:06.597 [Pipeline] { (Prologue) 00:00:06.853 [Pipeline] sh 00:00:07.134 + logger -p user.info -t JENKINS-CI 00:00:07.146 [Pipeline] echo 00:00:07.147 Node: WFP38 00:00:07.152 [Pipeline] sh 00:00:07.440 [Pipeline] setCustomBuildProperty 00:00:07.452 [Pipeline] echo 00:00:07.453 Cleanup processes 00:00:07.456 [Pipeline] sh 00:00:07.731 + sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:00:07.732 3001962 sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:00:07.743 [Pipeline] sh 00:00:08.027 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:00:08.027 ++ grep -v 'sudo pgrep' 00:00:08.027 ++ awk '{print $1}' 00:00:08.027 + sudo kill -9 00:00:08.027 + true 00:00:08.039 [Pipeline] cleanWs 00:00:08.048 [WS-CLEANUP] Deleting project workspace... 00:00:08.048 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.054 [WS-CLEANUP] done 00:00:08.057 [Pipeline] setCustomBuildProperty 00:00:08.068 [Pipeline] sh 00:00:08.348 + sudo git config --global --replace-all safe.directory '*' 00:00:08.433 [Pipeline] httpRequest 00:00:08.791 [Pipeline] echo 00:00:08.793 Sorcerer 10.211.164.101 is alive 00:00:08.799 [Pipeline] retry 00:00:08.801 [Pipeline] { 00:00:08.813 [Pipeline] httpRequest 00:00:08.818 HttpMethod: GET 00:00:08.818 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.818 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.835 Response Code: HTTP/1.1 200 OK 00:00:08.836 Success: Status code 200 is in the accepted range: 200,404 00:00:08.836 Saving response body to /var/jenkins/workspace/nvmf-cvl-phy-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:13.092 [Pipeline] } 00:00:13.105 [Pipeline] // retry 00:00:13.111 [Pipeline] sh 00:00:13.393 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:13.407 [Pipeline] httpRequest 00:00:14.073 [Pipeline] echo 00:00:14.074 Sorcerer 10.211.164.101 is alive 00:00:14.083 [Pipeline] retry 00:00:14.085 [Pipeline] { 00:00:14.096 [Pipeline] httpRequest 00:00:14.100 HttpMethod: GET 00:00:14.100 URL: http://10.211.164.101/packages/spdk_92108e0a2be7a969e8ee761a776a1ea64465759a.tar.gz 00:00:14.100 Sending request to url: http://10.211.164.101/packages/spdk_92108e0a2be7a969e8ee761a776a1ea64465759a.tar.gz 00:00:14.117 Response Code: HTTP/1.1 200 OK 00:00:14.117 Success: Status code 200 is in the accepted range: 200,404 00:00:14.117 Saving response body to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk_92108e0a2be7a969e8ee761a776a1ea64465759a.tar.gz 00:01:16.672 [Pipeline] } 00:01:16.688 [Pipeline] // retry 00:01:16.696 [Pipeline] sh 00:01:16.983 + tar --no-same-owner -xf spdk_92108e0a2be7a969e8ee761a776a1ea64465759a.tar.gz 00:01:19.533 [Pipeline] sh 00:01:19.818 + git -C spdk log --oneline -n5 00:01:19.818 92108e0a2 fsdev/aio: add support for null IOs 00:01:19.818 dcdab59d3 lib/reduce: Check return code of read superblock 00:01:19.818 95d9d27f7 bdev/nvme: controller failover/multipath doc change 00:01:19.819 f366dac4a bdev/nvme: removed 'multipath' param from spdk_bdev_nvme_create() 00:01:19.819 aa7c3b1e2 bdev/nvme: changed default config to multipath 00:01:19.829 [Pipeline] } 00:01:19.842 [Pipeline] // stage 00:01:19.851 [Pipeline] stage 00:01:19.854 [Pipeline] { (Prepare) 00:01:19.869 [Pipeline] writeFile 00:01:19.884 [Pipeline] sh 00:01:20.169 + logger -p user.info -t JENKINS-CI 00:01:20.182 [Pipeline] sh 00:01:20.466 + logger -p user.info -t JENKINS-CI 00:01:20.478 [Pipeline] sh 00:01:20.766 + cat autorun-spdk.conf 00:01:20.766 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.766 SPDK_TEST_NVMF=1 00:01:20.766 SPDK_TEST_NVME_CLI=1 00:01:20.766 SPDK_TEST_NVMF_TRANSPORT=rdma 00:01:20.766 SPDK_TEST_NVMF_NICS=e810 00:01:20.766 SPDK_RUN_ASAN=1 00:01:20.766 SPDK_RUN_UBSAN=1 00:01:20.766 NET_TYPE=phy 00:01:20.777 RUN_NIGHTLY=1 00:01:20.784 [Pipeline] readFile 00:01:20.814 [Pipeline] withEnv 00:01:20.816 [Pipeline] { 00:01:20.829 [Pipeline] sh 00:01:21.114 + set -ex 00:01:21.114 + [[ -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/autorun-spdk.conf ]] 00:01:21.114 + source /var/jenkins/workspace/nvmf-cvl-phy-autotest/autorun-spdk.conf 00:01:21.114 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.114 ++ SPDK_TEST_NVMF=1 00:01:21.114 ++ SPDK_TEST_NVME_CLI=1 00:01:21.114 ++ SPDK_TEST_NVMF_TRANSPORT=rdma 00:01:21.114 ++ 
SPDK_TEST_NVMF_NICS=e810 00:01:21.114 ++ SPDK_RUN_ASAN=1 00:01:21.114 ++ SPDK_RUN_UBSAN=1 00:01:21.114 ++ NET_TYPE=phy 00:01:21.114 ++ RUN_NIGHTLY=1 00:01:21.114 + case $SPDK_TEST_NVMF_NICS in 00:01:21.114 + DRIVERS=ice 00:01:21.114 + [[ rdma == \r\d\m\a ]] 00:01:21.114 + DRIVERS+=' irdma' 00:01:21.114 + [[ -n ice irdma ]] 00:01:21.114 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:21.114 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:21.114 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:21.114 rmmod: ERROR: Module i40iw is not currently loaded 00:01:21.114 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:21.114 + true 00:01:21.114 + for D in $DRIVERS 00:01:21.114 + sudo modprobe ice 00:01:21.114 + for D in $DRIVERS 00:01:21.114 + sudo modprobe irdma 00:01:21.373 + exit 0 00:01:21.383 [Pipeline] } 00:01:21.399 [Pipeline] // withEnv 00:01:21.405 [Pipeline] } 00:01:21.419 [Pipeline] // stage 00:01:21.429 [Pipeline] catchError 00:01:21.431 [Pipeline] { 00:01:21.446 [Pipeline] timeout 00:01:21.447 Timeout set to expire in 1 hr 0 min 00:01:21.449 [Pipeline] { 00:01:21.464 [Pipeline] stage 00:01:21.467 [Pipeline] { (Tests) 00:01:21.482 [Pipeline] sh 00:01:21.770 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-cvl-phy-autotest 00:01:21.770 ++ readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest 00:01:21.770 + DIR_ROOT=/var/jenkins/workspace/nvmf-cvl-phy-autotest 00:01:21.770 + [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest ]] 00:01:21.770 + DIR_SPDK=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:01:21.770 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-cvl-phy-autotest/output 00:01:21.770 + [[ -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk ]] 00:01:21.770 + [[ ! -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/output ]] 00:01:21.770 + mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/output 00:01:21.770 + [[ -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/output ]] 00:01:21.770 + [[ nvmf-cvl-phy-autotest == pkgdep-* ]] 00:01:21.770 + cd /var/jenkins/workspace/nvmf-cvl-phy-autotest 00:01:21.770 + source /etc/os-release 00:01:21.770 ++ NAME='Fedora Linux' 00:01:21.770 ++ VERSION='39 (Cloud Edition)' 00:01:21.770 ++ ID=fedora 00:01:21.770 ++ VERSION_ID=39 00:01:21.770 ++ VERSION_CODENAME= 00:01:21.770 ++ PLATFORM_ID=platform:f39 00:01:21.770 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:21.770 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:21.770 ++ LOGO=fedora-logo-icon 00:01:21.770 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:21.770 ++ HOME_URL=https://fedoraproject.org/ 00:01:21.770 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:21.770 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:21.770 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:21.770 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:21.770 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:21.770 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:21.770 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:21.770 ++ SUPPORT_END=2024-11-12 00:01:21.770 ++ VARIANT='Cloud Edition' 00:01:21.770 ++ VARIANT_ID=cloud 00:01:21.770 + uname -a 00:01:21.770 Linux spdk-wfp-38 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux 00:01:21.770 + sudo /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh status 00:01:25.140 Hugepages 00:01:25.140 node hugesize free / total 00:01:25.140 node0 1048576kB 0 / 0 00:01:25.140 node0 2048kB 0 / 0 00:01:25.140 node1 1048576kB 0 / 0 
00:01:25.140 node1 2048kB 0 / 0 00:01:25.140 00:01:25.140 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:25.140 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:25.140 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:25.140 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:25.140 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:25.141 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:25.141 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:25.141 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:25.141 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:25.141 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:25.141 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:25.141 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:25.141 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:25.141 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:25.141 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:25.141 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:25.141 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:25.141 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:25.141 + rm -f /tmp/spdk-ld-path 00:01:25.141 + source autorun-spdk.conf 00:01:25.141 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.141 ++ SPDK_TEST_NVMF=1 00:01:25.141 ++ SPDK_TEST_NVME_CLI=1 00:01:25.141 ++ SPDK_TEST_NVMF_TRANSPORT=rdma 00:01:25.141 ++ SPDK_TEST_NVMF_NICS=e810 00:01:25.141 ++ SPDK_RUN_ASAN=1 00:01:25.141 ++ SPDK_RUN_UBSAN=1 00:01:25.141 ++ NET_TYPE=phy 00:01:25.141 ++ RUN_NIGHTLY=1 00:01:25.141 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:25.141 + [[ -n '' ]] 00:01:25.141 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:01:25.141 + for M in /var/spdk/build-*-manifest.txt 00:01:25.141 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:25.141 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-cvl-phy-autotest/output/ 00:01:25.141 + for M in /var/spdk/build-*-manifest.txt 00:01:25.141 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:25.141 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-cvl-phy-autotest/output/ 00:01:25.141 + for M in /var/spdk/build-*-manifest.txt 00:01:25.141 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:25.141 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-cvl-phy-autotest/output/ 00:01:25.141 ++ uname 00:01:25.141 + [[ Linux == \L\i\n\u\x ]] 00:01:25.141 + sudo dmesg -T 00:01:25.141 + sudo dmesg --clear 00:01:25.141 + dmesg_pid=3003453 00:01:25.141 + [[ Fedora Linux == FreeBSD ]] 00:01:25.141 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:25.141 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:25.141 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:25.141 + [[ -x /usr/src/fio-static/fio ]] 00:01:25.141 + export FIO_BIN=/usr/src/fio-static/fio 00:01:25.141 + sudo dmesg -Tw 00:01:25.141 + FIO_BIN=/usr/src/fio-static/fio 00:01:25.141 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\c\v\l\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:25.141 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:25.141 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:25.141 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:25.141 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:25.141 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:25.141 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:25.141 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:25.141 + spdk/autorun.sh /var/jenkins/workspace/nvmf-cvl-phy-autotest/autorun-spdk.conf 00:01:25.141 Test configuration: 00:01:25.141 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.141 SPDK_TEST_NVMF=1 00:01:25.141 SPDK_TEST_NVME_CLI=1 00:01:25.141 SPDK_TEST_NVMF_TRANSPORT=rdma 00:01:25.141 SPDK_TEST_NVMF_NICS=e810 00:01:25.141 SPDK_RUN_ASAN=1 00:01:25.141 SPDK_RUN_UBSAN=1 00:01:25.141 NET_TYPE=phy 00:01:25.141 RUN_NIGHTLY=1 01:42:44 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:25.141 01:42:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:01:25.141 01:42:44 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:25.141 01:42:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:25.141 01:42:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:25.141 01:42:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:25.141 01:42:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.141 01:42:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.141 01:42:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.141 01:42:44 -- paths/export.sh@5 -- $ export PATH 00:01:25.141 01:42:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.141 01:42:44 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output 00:01:25.141 01:42:44 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:25.141 01:42:44 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728430964.XXXXXX 00:01:25.141 01:42:44 -- common/autobuild_common.sh@486 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1728430964.kCYONC 00:01:25.141 01:42:44 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:25.141 01:42:44 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:25.141 01:42:44 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/' 00:01:25.141 01:42:44 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:25.141 01:42:44 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:25.141 01:42:44 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:25.141 01:42:44 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:25.141 01:42:44 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.141 01:42:44 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:25.141 01:42:44 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:25.141 01:42:44 -- pm/common@17 -- $ local monitor 00:01:25.141 01:42:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.141 01:42:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.141 01:42:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.141 01:42:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.141 01:42:44 -- pm/common@21 -- $ date +%s 00:01:25.141 01:42:44 -- pm/common@25 -- $ sleep 1 00:01:25.141 01:42:44 -- pm/common@21 -- $ date +%s 00:01:25.141 01:42:44 -- pm/common@21 -- $ date +%s 00:01:25.141 01:42:44 -- pm/common@21 -- $ date +%s 00:01:25.141 01:42:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728430964 00:01:25.141 01:42:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728430964 00:01:25.141 01:42:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728430964 00:01:25.141 01:42:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728430964 00:01:25.141 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728430964_collect-cpu-load.pm.log 00:01:25.141 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728430964_collect-cpu-temp.pm.log 00:01:25.141 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728430964_collect-vmstat.pm.log 00:01:25.141 Redirecting to 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728430964_collect-bmc-pm.bmc.pm.log 00:01:26.080 01:42:45 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:26.080 01:42:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:26.080 01:42:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:26.080 01:42:45 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:01:26.080 01:42:45 -- spdk/autobuild.sh@16 -- $ date -u 00:01:26.080 Tue Oct 8 11:42:45 PM UTC 2024 00:01:26.080 01:42:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:26.080 v25.01-pre-41-g92108e0a2 00:01:26.080 01:42:45 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:26.080 01:42:45 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:26.080 01:42:45 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:26.080 01:42:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:26.080 01:42:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.080 ************************************ 00:01:26.080 START TEST asan 00:01:26.080 ************************************ 00:01:26.080 01:42:45 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:01:26.080 using asan 00:01:26.080 00:01:26.080 real 0m0.000s 00:01:26.080 user 0m0.000s 00:01:26.080 sys 0m0.000s 00:01:26.080 01:42:45 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:26.080 01:42:45 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:26.080 ************************************ 00:01:26.080 END TEST asan 00:01:26.080 ************************************ 00:01:26.080 01:42:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:26.081 01:42:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:26.081 01:42:45 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:26.081 01:42:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:26.081 01:42:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.081 ************************************ 00:01:26.081 START TEST ubsan 00:01:26.081 ************************************ 00:01:26.081 01:42:45 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:26.081 using ubsan 00:01:26.081 00:01:26.081 real 0m0.000s 00:01:26.081 user 0m0.000s 00:01:26.081 sys 0m0.000s 00:01:26.081 01:42:45 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:26.081 01:42:45 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:26.081 ************************************ 00:01:26.081 END TEST ubsan 00:01:26.081 ************************************ 00:01:26.081 01:42:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:26.081 01:42:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:26.081 01:42:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:26.081 01:42:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:26.081 01:42:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:26.081 01:42:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:26.081 01:42:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:26.081 01:42:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:26.081 01:42:45 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:26.346 Using default SPDK env in 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk 00:01:26.346 Using default DPDK in /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build 00:01:26.605 Using 'verbs' RDMA provider 00:01:39.756 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:51.965 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:52.793 Creating mk/config.mk...done. 00:01:52.793 Creating mk/cc.flags.mk...done. 00:01:52.793 Type 'make' to build. 00:01:52.793 01:43:12 -- spdk/autobuild.sh@70 -- $ run_test make make -j72 00:01:52.793 01:43:12 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:52.793 01:43:12 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:52.793 01:43:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.793 ************************************ 00:01:52.793 START TEST make 00:01:52.793 ************************************ 00:01:52.793 01:43:12 make -- common/autotest_common.sh@1125 -- $ make -j72 00:01:53.053 make[1]: Nothing to be done for 'all'. 00:02:03.056 The Meson build system 00:02:03.056 Version: 1.5.0 00:02:03.056 Source dir: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk 00:02:03.056 Build dir: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build-tmp 00:02:03.056 Build type: native build 00:02:03.056 Program cat found: YES (/usr/bin/cat) 00:02:03.056 Project name: DPDK 00:02:03.056 Project version: 24.03.0 00:02:03.056 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:03.056 C linker for the host machine: cc ld.bfd 2.40-14 00:02:03.056 Host machine cpu family: x86_64 00:02:03.056 Host machine cpu: x86_64 00:02:03.056 Message: ## Building in Developer Mode ## 00:02:03.056 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:03.056 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:03.056 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:03.056 Program python3 found: YES (/usr/bin/python3) 00:02:03.056 Program cat found: YES (/usr/bin/cat) 00:02:03.056 Compiler for C supports arguments -march=native: YES 00:02:03.056 Checking for size of "void *" : 8 00:02:03.056 Checking for size of "void *" : 8 (cached) 00:02:03.056 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:03.056 Library m found: YES 00:02:03.056 Library numa found: YES 00:02:03.056 Has header "numaif.h" : YES 00:02:03.056 Library fdt found: NO 00:02:03.056 Library execinfo found: NO 00:02:03.056 Has header "execinfo.h" : YES 00:02:03.056 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:03.056 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:03.056 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:03.056 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:03.056 Run-time dependency openssl found: YES 3.1.1 00:02:03.056 Run-time dependency libpcap found: YES 1.10.4 00:02:03.056 Has header "pcap.h" with dependency libpcap: YES 00:02:03.056 Compiler for C supports arguments -Wcast-qual: YES 00:02:03.056 Compiler for C supports arguments -Wdeprecated: YES 00:02:03.056 Compiler for C supports arguments -Wformat: YES 00:02:03.056 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:03.056 Compiler for C supports arguments -Wformat-security: NO 00:02:03.056 
Compiler for C supports arguments -Wmissing-declarations: YES 00:02:03.056 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:03.056 Compiler for C supports arguments -Wnested-externs: YES 00:02:03.056 Compiler for C supports arguments -Wold-style-definition: YES 00:02:03.056 Compiler for C supports arguments -Wpointer-arith: YES 00:02:03.056 Compiler for C supports arguments -Wsign-compare: YES 00:02:03.056 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:03.056 Compiler for C supports arguments -Wundef: YES 00:02:03.056 Compiler for C supports arguments -Wwrite-strings: YES 00:02:03.056 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:03.056 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:03.056 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:03.056 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:03.056 Program objdump found: YES (/usr/bin/objdump) 00:02:03.056 Compiler for C supports arguments -mavx512f: YES 00:02:03.056 Checking if "AVX512 checking" compiles: YES 00:02:03.056 Fetching value of define "__SSE4_2__" : 1 00:02:03.056 Fetching value of define "__AES__" : 1 00:02:03.056 Fetching value of define "__AVX__" : 1 00:02:03.056 Fetching value of define "__AVX2__" : 1 00:02:03.056 Fetching value of define "__AVX512BW__" : 1 00:02:03.056 Fetching value of define "__AVX512CD__" : 1 00:02:03.056 Fetching value of define "__AVX512DQ__" : 1 00:02:03.056 Fetching value of define "__AVX512F__" : 1 00:02:03.056 Fetching value of define "__AVX512VL__" : 1 00:02:03.056 Fetching value of define "__PCLMUL__" : 1 00:02:03.056 Fetching value of define "__RDRND__" : 1 00:02:03.056 Fetching value of define "__RDSEED__" : 1 00:02:03.056 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:03.056 Fetching value of define "__znver1__" : (undefined) 00:02:03.056 Fetching value of define "__znver2__" : (undefined) 00:02:03.056 Fetching value of define "__znver3__" : (undefined) 00:02:03.056 Fetching value of define "__znver4__" : (undefined) 00:02:03.056 Library asan found: YES 00:02:03.056 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:03.056 Message: lib/log: Defining dependency "log" 00:02:03.056 Message: lib/kvargs: Defining dependency "kvargs" 00:02:03.056 Message: lib/telemetry: Defining dependency "telemetry" 00:02:03.056 Library rt found: YES 00:02:03.056 Checking for function "getentropy" : NO 00:02:03.056 Message: lib/eal: Defining dependency "eal" 00:02:03.056 Message: lib/ring: Defining dependency "ring" 00:02:03.056 Message: lib/rcu: Defining dependency "rcu" 00:02:03.056 Message: lib/mempool: Defining dependency "mempool" 00:02:03.056 Message: lib/mbuf: Defining dependency "mbuf" 00:02:03.056 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:03.056 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:03.056 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:03.056 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:03.056 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:03.056 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:03.056 Compiler for C supports arguments -mpclmul: YES 00:02:03.056 Compiler for C supports arguments -maes: YES 00:02:03.056 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:03.056 Compiler for C supports arguments -mavx512bw: YES 00:02:03.056 Compiler for C supports arguments -mavx512dq: YES 00:02:03.056 Compiler for C supports arguments 
-mavx512vl: YES 00:02:03.056 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:03.056 Compiler for C supports arguments -mavx2: YES 00:02:03.056 Compiler for C supports arguments -mavx: YES 00:02:03.056 Message: lib/net: Defining dependency "net" 00:02:03.056 Message: lib/meter: Defining dependency "meter" 00:02:03.056 Message: lib/ethdev: Defining dependency "ethdev" 00:02:03.056 Message: lib/pci: Defining dependency "pci" 00:02:03.056 Message: lib/cmdline: Defining dependency "cmdline" 00:02:03.056 Message: lib/hash: Defining dependency "hash" 00:02:03.056 Message: lib/timer: Defining dependency "timer" 00:02:03.056 Message: lib/compressdev: Defining dependency "compressdev" 00:02:03.056 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:03.056 Message: lib/dmadev: Defining dependency "dmadev" 00:02:03.056 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:03.056 Message: lib/power: Defining dependency "power" 00:02:03.056 Message: lib/reorder: Defining dependency "reorder" 00:02:03.056 Message: lib/security: Defining dependency "security" 00:02:03.056 Has header "linux/userfaultfd.h" : YES 00:02:03.056 Has header "linux/vduse.h" : YES 00:02:03.056 Message: lib/vhost: Defining dependency "vhost" 00:02:03.056 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:03.056 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:03.056 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:03.056 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:03.056 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:03.056 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:03.056 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:03.056 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:03.056 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:03.056 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:03.056 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:03.056 Configuring doxy-api-html.conf using configuration 00:02:03.056 Configuring doxy-api-man.conf using configuration 00:02:03.056 Program mandb found: YES (/usr/bin/mandb) 00:02:03.056 Program sphinx-build found: NO 00:02:03.056 Configuring rte_build_config.h using configuration 00:02:03.056 Message: 00:02:03.056 ================= 00:02:03.056 Applications Enabled 00:02:03.056 ================= 00:02:03.056 00:02:03.056 apps: 00:02:03.056 00:02:03.056 00:02:03.056 Message: 00:02:03.056 ================= 00:02:03.056 Libraries Enabled 00:02:03.056 ================= 00:02:03.056 00:02:03.056 libs: 00:02:03.056 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:03.056 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:03.056 cryptodev, dmadev, power, reorder, security, vhost, 00:02:03.056 00:02:03.056 Message: 00:02:03.056 =============== 00:02:03.056 Drivers Enabled 00:02:03.056 =============== 00:02:03.056 00:02:03.056 common: 00:02:03.056 00:02:03.056 bus: 00:02:03.056 pci, vdev, 00:02:03.056 mempool: 00:02:03.056 ring, 00:02:03.056 dma: 00:02:03.056 00:02:03.056 net: 00:02:03.056 00:02:03.056 crypto: 00:02:03.056 00:02:03.056 compress: 00:02:03.056 00:02:03.056 vdpa: 00:02:03.056 00:02:03.056 00:02:03.056 Message: 00:02:03.056 ================= 00:02:03.056 Content Skipped 00:02:03.056 ================= 00:02:03.056 00:02:03.056 apps: 00:02:03.056 dumpcap: explicitly 
disabled via build config 00:02:03.056 graph: explicitly disabled via build config 00:02:03.056 pdump: explicitly disabled via build config 00:02:03.056 proc-info: explicitly disabled via build config 00:02:03.056 test-acl: explicitly disabled via build config 00:02:03.056 test-bbdev: explicitly disabled via build config 00:02:03.056 test-cmdline: explicitly disabled via build config 00:02:03.056 test-compress-perf: explicitly disabled via build config 00:02:03.056 test-crypto-perf: explicitly disabled via build config 00:02:03.057 test-dma-perf: explicitly disabled via build config 00:02:03.057 test-eventdev: explicitly disabled via build config 00:02:03.057 test-fib: explicitly disabled via build config 00:02:03.057 test-flow-perf: explicitly disabled via build config 00:02:03.057 test-gpudev: explicitly disabled via build config 00:02:03.057 test-mldev: explicitly disabled via build config 00:02:03.057 test-pipeline: explicitly disabled via build config 00:02:03.057 test-pmd: explicitly disabled via build config 00:02:03.057 test-regex: explicitly disabled via build config 00:02:03.057 test-sad: explicitly disabled via build config 00:02:03.057 test-security-perf: explicitly disabled via build config 00:02:03.057 00:02:03.057 libs: 00:02:03.057 argparse: explicitly disabled via build config 00:02:03.057 metrics: explicitly disabled via build config 00:02:03.057 acl: explicitly disabled via build config 00:02:03.057 bbdev: explicitly disabled via build config 00:02:03.057 bitratestats: explicitly disabled via build config 00:02:03.057 bpf: explicitly disabled via build config 00:02:03.057 cfgfile: explicitly disabled via build config 00:02:03.057 distributor: explicitly disabled via build config 00:02:03.057 efd: explicitly disabled via build config 00:02:03.057 eventdev: explicitly disabled via build config 00:02:03.057 dispatcher: explicitly disabled via build config 00:02:03.057 gpudev: explicitly disabled via build config 00:02:03.057 gro: explicitly disabled via build config 00:02:03.057 gso: explicitly disabled via build config 00:02:03.057 ip_frag: explicitly disabled via build config 00:02:03.057 jobstats: explicitly disabled via build config 00:02:03.057 latencystats: explicitly disabled via build config 00:02:03.057 lpm: explicitly disabled via build config 00:02:03.057 member: explicitly disabled via build config 00:02:03.057 pcapng: explicitly disabled via build config 00:02:03.057 rawdev: explicitly disabled via build config 00:02:03.057 regexdev: explicitly disabled via build config 00:02:03.057 mldev: explicitly disabled via build config 00:02:03.057 rib: explicitly disabled via build config 00:02:03.057 sched: explicitly disabled via build config 00:02:03.057 stack: explicitly disabled via build config 00:02:03.057 ipsec: explicitly disabled via build config 00:02:03.057 pdcp: explicitly disabled via build config 00:02:03.057 fib: explicitly disabled via build config 00:02:03.057 port: explicitly disabled via build config 00:02:03.057 pdump: explicitly disabled via build config 00:02:03.057 table: explicitly disabled via build config 00:02:03.057 pipeline: explicitly disabled via build config 00:02:03.057 graph: explicitly disabled via build config 00:02:03.057 node: explicitly disabled via build config 00:02:03.057 00:02:03.057 drivers: 00:02:03.057 common/cpt: not in enabled drivers build config 00:02:03.057 common/dpaax: not in enabled drivers build config 00:02:03.057 common/iavf: not in enabled drivers build config 00:02:03.057 common/idpf: not in enabled drivers 
build config 00:02:03.057 common/ionic: not in enabled drivers build config 00:02:03.057 common/mvep: not in enabled drivers build config 00:02:03.057 common/octeontx: not in enabled drivers build config 00:02:03.057 bus/auxiliary: not in enabled drivers build config 00:02:03.057 bus/cdx: not in enabled drivers build config 00:02:03.057 bus/dpaa: not in enabled drivers build config 00:02:03.057 bus/fslmc: not in enabled drivers build config 00:02:03.057 bus/ifpga: not in enabled drivers build config 00:02:03.057 bus/platform: not in enabled drivers build config 00:02:03.057 bus/uacce: not in enabled drivers build config 00:02:03.057 bus/vmbus: not in enabled drivers build config 00:02:03.057 common/cnxk: not in enabled drivers build config 00:02:03.057 common/mlx5: not in enabled drivers build config 00:02:03.057 common/nfp: not in enabled drivers build config 00:02:03.057 common/nitrox: not in enabled drivers build config 00:02:03.057 common/qat: not in enabled drivers build config 00:02:03.057 common/sfc_efx: not in enabled drivers build config 00:02:03.057 mempool/bucket: not in enabled drivers build config 00:02:03.057 mempool/cnxk: not in enabled drivers build config 00:02:03.057 mempool/dpaa: not in enabled drivers build config 00:02:03.057 mempool/dpaa2: not in enabled drivers build config 00:02:03.057 mempool/octeontx: not in enabled drivers build config 00:02:03.057 mempool/stack: not in enabled drivers build config 00:02:03.057 dma/cnxk: not in enabled drivers build config 00:02:03.057 dma/dpaa: not in enabled drivers build config 00:02:03.057 dma/dpaa2: not in enabled drivers build config 00:02:03.057 dma/hisilicon: not in enabled drivers build config 00:02:03.057 dma/idxd: not in enabled drivers build config 00:02:03.057 dma/ioat: not in enabled drivers build config 00:02:03.057 dma/skeleton: not in enabled drivers build config 00:02:03.057 net/af_packet: not in enabled drivers build config 00:02:03.057 net/af_xdp: not in enabled drivers build config 00:02:03.057 net/ark: not in enabled drivers build config 00:02:03.057 net/atlantic: not in enabled drivers build config 00:02:03.057 net/avp: not in enabled drivers build config 00:02:03.057 net/axgbe: not in enabled drivers build config 00:02:03.057 net/bnx2x: not in enabled drivers build config 00:02:03.057 net/bnxt: not in enabled drivers build config 00:02:03.057 net/bonding: not in enabled drivers build config 00:02:03.057 net/cnxk: not in enabled drivers build config 00:02:03.057 net/cpfl: not in enabled drivers build config 00:02:03.057 net/cxgbe: not in enabled drivers build config 00:02:03.057 net/dpaa: not in enabled drivers build config 00:02:03.057 net/dpaa2: not in enabled drivers build config 00:02:03.057 net/e1000: not in enabled drivers build config 00:02:03.057 net/ena: not in enabled drivers build config 00:02:03.057 net/enetc: not in enabled drivers build config 00:02:03.057 net/enetfec: not in enabled drivers build config 00:02:03.057 net/enic: not in enabled drivers build config 00:02:03.057 net/failsafe: not in enabled drivers build config 00:02:03.057 net/fm10k: not in enabled drivers build config 00:02:03.057 net/gve: not in enabled drivers build config 00:02:03.057 net/hinic: not in enabled drivers build config 00:02:03.057 net/hns3: not in enabled drivers build config 00:02:03.057 net/i40e: not in enabled drivers build config 00:02:03.057 net/iavf: not in enabled drivers build config 00:02:03.057 net/ice: not in enabled drivers build config 00:02:03.057 net/idpf: not in enabled drivers build config 
00:02:03.057 net/igc: not in enabled drivers build config 00:02:03.057 net/ionic: not in enabled drivers build config 00:02:03.057 net/ipn3ke: not in enabled drivers build config 00:02:03.057 net/ixgbe: not in enabled drivers build config 00:02:03.057 net/mana: not in enabled drivers build config 00:02:03.057 net/memif: not in enabled drivers build config 00:02:03.057 net/mlx4: not in enabled drivers build config 00:02:03.057 net/mlx5: not in enabled drivers build config 00:02:03.057 net/mvneta: not in enabled drivers build config 00:02:03.057 net/mvpp2: not in enabled drivers build config 00:02:03.057 net/netvsc: not in enabled drivers build config 00:02:03.057 net/nfb: not in enabled drivers build config 00:02:03.057 net/nfp: not in enabled drivers build config 00:02:03.057 net/ngbe: not in enabled drivers build config 00:02:03.057 net/null: not in enabled drivers build config 00:02:03.057 net/octeontx: not in enabled drivers build config 00:02:03.057 net/octeon_ep: not in enabled drivers build config 00:02:03.057 net/pcap: not in enabled drivers build config 00:02:03.057 net/pfe: not in enabled drivers build config 00:02:03.057 net/qede: not in enabled drivers build config 00:02:03.057 net/ring: not in enabled drivers build config 00:02:03.057 net/sfc: not in enabled drivers build config 00:02:03.057 net/softnic: not in enabled drivers build config 00:02:03.057 net/tap: not in enabled drivers build config 00:02:03.057 net/thunderx: not in enabled drivers build config 00:02:03.057 net/txgbe: not in enabled drivers build config 00:02:03.057 net/vdev_netvsc: not in enabled drivers build config 00:02:03.057 net/vhost: not in enabled drivers build config 00:02:03.057 net/virtio: not in enabled drivers build config 00:02:03.057 net/vmxnet3: not in enabled drivers build config 00:02:03.057 raw/*: missing internal dependency, "rawdev" 00:02:03.057 crypto/armv8: not in enabled drivers build config 00:02:03.057 crypto/bcmfs: not in enabled drivers build config 00:02:03.057 crypto/caam_jr: not in enabled drivers build config 00:02:03.057 crypto/ccp: not in enabled drivers build config 00:02:03.057 crypto/cnxk: not in enabled drivers build config 00:02:03.057 crypto/dpaa_sec: not in enabled drivers build config 00:02:03.057 crypto/dpaa2_sec: not in enabled drivers build config 00:02:03.057 crypto/ipsec_mb: not in enabled drivers build config 00:02:03.057 crypto/mlx5: not in enabled drivers build config 00:02:03.057 crypto/mvsam: not in enabled drivers build config 00:02:03.057 crypto/nitrox: not in enabled drivers build config 00:02:03.057 crypto/null: not in enabled drivers build config 00:02:03.057 crypto/octeontx: not in enabled drivers build config 00:02:03.057 crypto/openssl: not in enabled drivers build config 00:02:03.057 crypto/scheduler: not in enabled drivers build config 00:02:03.057 crypto/uadk: not in enabled drivers build config 00:02:03.057 crypto/virtio: not in enabled drivers build config 00:02:03.057 compress/isal: not in enabled drivers build config 00:02:03.057 compress/mlx5: not in enabled drivers build config 00:02:03.057 compress/nitrox: not in enabled drivers build config 00:02:03.057 compress/octeontx: not in enabled drivers build config 00:02:03.057 compress/zlib: not in enabled drivers build config 00:02:03.057 regex/*: missing internal dependency, "regexdev" 00:02:03.057 ml/*: missing internal dependency, "mldev" 00:02:03.057 vdpa/ifc: not in enabled drivers build config 00:02:03.057 vdpa/mlx5: not in enabled drivers build config 00:02:03.057 vdpa/nfp: not in enabled 
drivers build config 00:02:03.057 vdpa/sfc: not in enabled drivers build config 00:02:03.057 event/*: missing internal dependency, "eventdev" 00:02:03.057 baseband/*: missing internal dependency, "bbdev" 00:02:03.057 gpu/*: missing internal dependency, "gpudev" 00:02:03.057 00:02:03.057 00:02:03.057 Build targets in project: 85 00:02:03.057 00:02:03.057 DPDK 24.03.0 00:02:03.057 00:02:03.057 User defined options 00:02:03.057 buildtype : debug 00:02:03.057 default_library : shared 00:02:03.057 libdir : lib 00:02:03.057 prefix : /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build 00:02:03.057 b_sanitize : address 00:02:03.057 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:03.057 c_link_args : 00:02:03.057 cpu_instruction_set: native 00:02:03.057 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:03.057 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:03.057 enable_docs : false 00:02:03.057 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:03.057 enable_kmods : false 00:02:03.057 max_lcores : 128 00:02:03.058 tests : false 00:02:03.058 00:02:03.058 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:03.058 ninja: Entering directory `/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build-tmp' 00:02:03.058 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:03.058 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:03.058 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:03.058 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:03.058 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:03.058 [6/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:03.058 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:03.058 [8/268] Linking static target lib/librte_kvargs.a 00:02:03.058 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:03.058 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:03.058 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:03.058 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:03.058 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:03.058 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:03.058 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:03.058 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:03.058 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:03.058 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:03.058 [19/268] Linking static target lib/librte_log.a 00:02:03.058 [20/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:03.058 [21/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:03.058 [22/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:03.058 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:03.058 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:03.058 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:03.058 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:03.058 [27/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:03.058 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:03.058 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:03.058 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:03.058 [31/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:03.058 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:03.058 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:03.058 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:03.058 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:03.321 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:03.321 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:03.321 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:03.321 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:03.321 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:03.321 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:03.321 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:03.321 [43/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:03.321 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:03.321 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:03.321 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:03.321 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:03.321 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:03.321 [49/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:03.321 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:03.321 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:03.321 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:03.321 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:03.321 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:03.321 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:03.321 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:03.321 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:03.321 [58/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:03.321 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:03.321 [60/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:03.321 [61/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:03.321 [62/268] Linking static target lib/librte_telemetry.a 00:02:03.321 [63/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:03.321 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:03.321 [65/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:03.321 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:03.321 [67/268] Linking static target lib/librte_ring.a 00:02:03.321 [68/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:03.321 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:03.321 [70/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:03.321 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:03.321 [72/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:03.321 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:03.321 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:03.321 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:03.321 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:03.321 [77/268] Linking static target lib/librte_pci.a 00:02:03.321 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:03.321 [79/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:03.321 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:03.321 [81/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:03.321 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:03.321 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:03.321 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:03.321 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:03.321 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:03.321 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:03.321 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:03.321 [89/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.321 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:03.321 [91/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:03.321 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:03.321 [93/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:03.583 [94/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:03.583 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:03.583 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:03.583 [97/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:03.583 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:03.583 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:03.583 [100/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:03.583 [101/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:03.583 [102/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:03.583 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:03.583 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:03.583 [105/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:03.583 [106/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:03.583 [107/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:03.583 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:03.583 [109/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:03.583 [110/268] Linking static target lib/librte_mempool.a 00:02:03.583 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:03.583 [112/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:03.583 [113/268] Linking static target lib/librte_meter.a 00:02:03.583 [114/268] Linking static target lib/librte_net.a 00:02:03.843 [115/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:03.843 [116/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:03.843 [117/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.843 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:03.843 [119/268] Linking static target lib/librte_rcu.a 00:02:03.843 [120/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.843 [121/268] Linking static target lib/librte_eal.a 00:02:03.843 [122/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:03.843 [123/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:03.843 [124/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:03.843 [125/268] Linking target lib/librte_log.so.24.1 00:02:03.843 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:03.843 [127/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:03.843 [128/268] Linking static target lib/librte_cmdline.a 00:02:03.843 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:03.843 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:03.843 [131/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.843 [132/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:03.843 [133/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:03.843 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:03.843 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:03.843 [136/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:03.843 [137/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:03.843 [138/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:03.843 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:03.843 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:03.843 [141/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:03.843 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:03.843 [143/268] Linking static target 
lib/librte_timer.a 00:02:03.843 [144/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:03.843 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:03.843 [146/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.843 [147/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:03.843 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:03.843 [149/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:03.843 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:04.100 [151/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:04.100 [152/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.100 [153/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:04.100 [154/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:04.100 [155/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:04.100 [156/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.100 [157/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:04.100 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:04.100 [159/268] Linking target lib/librte_kvargs.so.24.1 00:02:04.100 [160/268] Linking target lib/librte_telemetry.so.24.1 00:02:04.100 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:04.100 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:04.100 [163/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:04.100 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:04.100 [165/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.100 [166/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:04.100 [167/268] Linking static target lib/librte_compressdev.a 00:02:04.100 [168/268] Linking static target lib/librte_power.a 00:02:04.100 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:04.100 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:04.100 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:04.100 [172/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:04.100 [173/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:04.100 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:04.100 [175/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:04.100 [176/268] Linking static target lib/librte_dmadev.a 00:02:04.100 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:04.100 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:04.100 [179/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:04.100 [180/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:04.100 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:04.100 [182/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:04.100 [183/268] Generating symbol file 
lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:04.100 [184/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:04.100 [185/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:04.100 [186/268] Linking static target lib/librte_reorder.a 00:02:04.100 [187/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:04.356 [188/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:04.356 [189/268] Linking static target lib/librte_security.a 00:02:04.356 [190/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.356 [191/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:04.356 [192/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:04.356 [193/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.356 [194/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.356 [195/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:04.356 [196/268] Linking static target drivers/librte_bus_vdev.a 00:02:04.356 [197/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.356 [198/268] Linking static target lib/librte_mbuf.a 00:02:04.356 [199/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.356 [200/268] Linking static target drivers/librte_bus_pci.a 00:02:04.356 [201/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:04.356 [202/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:04.356 [203/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.356 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:04.614 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:04.614 [206/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:04.614 [207/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:04.614 [208/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.614 [209/268] Linking static target lib/librte_hash.a 00:02:04.614 [210/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:04.614 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:04.614 [212/268] Linking static target drivers/librte_mempool_ring.a 00:02:04.614 [213/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.914 [214/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.914 [215/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.914 [216/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.914 [217/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:04.914 [218/268] Linking static target lib/librte_cryptodev.a 00:02:04.914 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:04.914 [220/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.914 [221/268] 
Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.172 [222/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.172 [223/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.739 [224/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:05.739 [225/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.739 [226/268] Linking static target lib/librte_ethdev.a 00:02:06.673 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:06.932 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.220 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:10.220 [230/268] Linking static target lib/librte_vhost.a 00:02:11.596 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.876 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.441 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.441 [234/268] Linking target lib/librte_eal.so.24.1 00:02:15.441 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:15.698 [236/268] Linking target lib/librte_ring.so.24.1 00:02:15.698 [237/268] Linking target lib/librte_timer.so.24.1 00:02:15.698 [238/268] Linking target lib/librte_meter.so.24.1 00:02:15.698 [239/268] Linking target lib/librte_pci.so.24.1 00:02:15.698 [240/268] Linking target lib/librte_dmadev.so.24.1 00:02:15.698 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:15.698 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:15.698 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:15.698 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:15.698 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:15.698 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:15.698 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:15.698 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:15.698 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:15.956 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:15.956 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:15.956 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:15.956 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:15.956 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:15.956 [255/268] Linking target lib/librte_net.so.24.1 00:02:15.956 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:16.213 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:16.213 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:16.213 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:16.213 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:16.213 [261/268] Linking target lib/librte_hash.so.24.1 00:02:16.213 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:16.213 
[263/268] Linking target lib/librte_ethdev.so.24.1 00:02:16.213 [264/268] Linking target lib/librte_security.so.24.1 00:02:16.473 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:16.473 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:16.473 [267/268] Linking target lib/librte_power.so.24.1 00:02:16.473 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:16.473 INFO: autodetecting backend as ninja 00:02:16.473 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build-tmp -j 72 00:02:26.441 CC lib/ut_mock/mock.o 00:02:26.441 CC lib/ut/ut.o 00:02:26.441 CC lib/log/log.o 00:02:26.441 CC lib/log/log_flags.o 00:02:26.441 CC lib/log/log_deprecated.o 00:02:26.441 LIB libspdk_ut.a 00:02:26.441 LIB libspdk_ut_mock.a 00:02:26.441 SO libspdk_ut_mock.so.6.0 00:02:26.441 LIB libspdk_log.a 00:02:26.441 SO libspdk_ut.so.2.0 00:02:26.441 SO libspdk_log.so.7.0 00:02:26.441 SYMLINK libspdk_ut_mock.so 00:02:26.441 SYMLINK libspdk_ut.so 00:02:26.441 SYMLINK libspdk_log.so 00:02:26.441 CC lib/ioat/ioat.o 00:02:26.441 CC lib/util/bit_array.o 00:02:26.441 CC lib/util/base64.o 00:02:26.441 CC lib/util/crc16.o 00:02:26.441 CC lib/util/cpuset.o 00:02:26.441 CC lib/util/crc32c.o 00:02:26.441 CC lib/util/crc32.o 00:02:26.441 CC lib/util/crc32_ieee.o 00:02:26.441 CC lib/dma/dma.o 00:02:26.441 CC lib/util/dif.o 00:02:26.441 CC lib/util/crc64.o 00:02:26.441 CC lib/util/fd.o 00:02:26.441 CXX lib/trace_parser/trace.o 00:02:26.441 CC lib/util/fd_group.o 00:02:26.441 CC lib/util/file.o 00:02:26.441 CC lib/util/hexlify.o 00:02:26.441 CC lib/util/iov.o 00:02:26.441 CC lib/util/net.o 00:02:26.441 CC lib/util/math.o 00:02:26.441 CC lib/util/pipe.o 00:02:26.442 CC lib/util/strerror_tls.o 00:02:26.442 CC lib/util/string.o 00:02:26.442 CC lib/util/uuid.o 00:02:26.442 CC lib/util/xor.o 00:02:26.442 CC lib/util/zipf.o 00:02:26.442 CC lib/util/md5.o 00:02:26.442 CC lib/vfio_user/host/vfio_user_pci.o 00:02:26.442 CC lib/vfio_user/host/vfio_user.o 00:02:26.442 LIB libspdk_dma.a 00:02:26.442 SO libspdk_dma.so.5.0 00:02:26.442 LIB libspdk_ioat.a 00:02:26.442 SYMLINK libspdk_dma.so 00:02:26.442 SO libspdk_ioat.so.7.0 00:02:26.442 SYMLINK libspdk_ioat.so 00:02:26.699 LIB libspdk_vfio_user.a 00:02:26.699 SO libspdk_vfio_user.so.5.0 00:02:26.699 SYMLINK libspdk_vfio_user.so 00:02:26.699 LIB libspdk_util.a 00:02:26.699 SO libspdk_util.so.10.0 00:02:26.956 SYMLINK libspdk_util.so 00:02:26.956 LIB libspdk_trace_parser.a 00:02:26.956 SO libspdk_trace_parser.so.6.0 00:02:27.215 SYMLINK libspdk_trace_parser.so 00:02:27.215 CC lib/idxd/idxd.o 00:02:27.215 CC lib/rdma_utils/rdma_utils.o 00:02:27.215 CC lib/idxd/idxd_user.o 00:02:27.215 CC lib/idxd/idxd_kernel.o 00:02:27.215 CC lib/vmd/vmd.o 00:02:27.215 CC lib/json/json_parse.o 00:02:27.215 CC lib/env_dpdk/pci.o 00:02:27.215 CC lib/env_dpdk/env.o 00:02:27.215 CC lib/env_dpdk/memory.o 00:02:27.215 CC lib/vmd/led.o 00:02:27.215 CC lib/json/json_util.o 00:02:27.215 CC lib/env_dpdk/threads.o 00:02:27.215 CC lib/json/json_write.o 00:02:27.215 CC lib/env_dpdk/init.o 00:02:27.215 CC lib/rdma_provider/common.o 00:02:27.215 CC lib/env_dpdk/pci_ioat.o 00:02:27.215 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:27.215 CC lib/env_dpdk/pci_virtio.o 00:02:27.215 CC lib/env_dpdk/pci_vmd.o 00:02:27.215 CC lib/env_dpdk/pci_event.o 00:02:27.215 CC lib/env_dpdk/pci_idxd.o 00:02:27.215 CC lib/conf/conf.o 00:02:27.215 CC lib/env_dpdk/sigbus_handler.o 
00:02:27.215 CC lib/env_dpdk/pci_dpdk.o 00:02:27.215 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:27.215 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:27.474 LIB libspdk_rdma_provider.a 00:02:27.474 SO libspdk_rdma_provider.so.6.0 00:02:27.474 LIB libspdk_conf.a 00:02:27.474 SO libspdk_conf.so.6.0 00:02:27.474 LIB libspdk_rdma_utils.a 00:02:27.474 SYMLINK libspdk_rdma_provider.so 00:02:27.732 SO libspdk_rdma_utils.so.1.0 00:02:27.732 LIB libspdk_json.a 00:02:27.732 SYMLINK libspdk_conf.so 00:02:27.732 SO libspdk_json.so.6.0 00:02:27.732 SYMLINK libspdk_rdma_utils.so 00:02:27.732 SYMLINK libspdk_json.so 00:02:27.991 LIB libspdk_idxd.a 00:02:27.991 LIB libspdk_vmd.a 00:02:27.991 SO libspdk_vmd.so.6.0 00:02:27.991 SO libspdk_idxd.so.12.1 00:02:27.991 SYMLINK libspdk_idxd.so 00:02:27.991 SYMLINK libspdk_vmd.so 00:02:27.991 CC lib/jsonrpc/jsonrpc_server.o 00:02:27.991 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:27.991 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:27.991 CC lib/jsonrpc/jsonrpc_client.o 00:02:28.250 LIB libspdk_jsonrpc.a 00:02:28.250 SO libspdk_jsonrpc.so.6.0 00:02:28.508 SYMLINK libspdk_jsonrpc.so 00:02:28.767 LIB libspdk_env_dpdk.a 00:02:28.767 SO libspdk_env_dpdk.so.15.0 00:02:28.767 CC lib/rpc/rpc.o 00:02:28.767 SYMLINK libspdk_env_dpdk.so 00:02:29.024 LIB libspdk_rpc.a 00:02:29.024 SO libspdk_rpc.so.6.0 00:02:29.024 SYMLINK libspdk_rpc.so 00:02:29.282 CC lib/trace/trace.o 00:02:29.282 CC lib/trace/trace_flags.o 00:02:29.540 CC lib/trace/trace_rpc.o 00:02:29.540 CC lib/notify/notify_rpc.o 00:02:29.540 CC lib/notify/notify.o 00:02:29.540 CC lib/keyring/keyring.o 00:02:29.540 CC lib/keyring/keyring_rpc.o 00:02:29.540 LIB libspdk_notify.a 00:02:29.540 SO libspdk_notify.so.6.0 00:02:29.540 LIB libspdk_keyring.a 00:02:29.798 LIB libspdk_trace.a 00:02:29.798 SO libspdk_keyring.so.2.0 00:02:29.798 SYMLINK libspdk_notify.so 00:02:29.798 SO libspdk_trace.so.11.0 00:02:29.798 SYMLINK libspdk_keyring.so 00:02:29.798 SYMLINK libspdk_trace.so 00:02:30.056 CC lib/thread/thread.o 00:02:30.056 CC lib/thread/iobuf.o 00:02:30.056 CC lib/sock/sock.o 00:02:30.056 CC lib/sock/sock_rpc.o 00:02:30.621 LIB libspdk_sock.a 00:02:30.621 SO libspdk_sock.so.10.0 00:02:30.621 SYMLINK libspdk_sock.so 00:02:30.879 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:30.879 CC lib/nvme/nvme_ctrlr.o 00:02:30.879 CC lib/nvme/nvme_fabric.o 00:02:30.879 CC lib/nvme/nvme_ns_cmd.o 00:02:30.879 CC lib/nvme/nvme_ns.o 00:02:30.879 CC lib/nvme/nvme_pcie_common.o 00:02:30.879 CC lib/nvme/nvme_pcie.o 00:02:30.879 CC lib/nvme/nvme_qpair.o 00:02:30.879 CC lib/nvme/nvme_transport.o 00:02:30.879 CC lib/nvme/nvme.o 00:02:30.879 CC lib/nvme/nvme_quirks.o 00:02:30.879 CC lib/nvme/nvme_discovery.o 00:02:30.879 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:30.879 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:30.879 CC lib/nvme/nvme_tcp.o 00:02:30.879 CC lib/nvme/nvme_opal.o 00:02:30.879 CC lib/nvme/nvme_io_msg.o 00:02:30.879 CC lib/nvme/nvme_poll_group.o 00:02:30.879 CC lib/nvme/nvme_zns.o 00:02:30.879 CC lib/nvme/nvme_stubs.o 00:02:30.879 CC lib/nvme/nvme_auth.o 00:02:30.879 CC lib/nvme/nvme_cuse.o 00:02:30.879 CC lib/nvme/nvme_rdma.o 00:02:31.811 LIB libspdk_thread.a 00:02:31.811 SO libspdk_thread.so.10.2 00:02:31.811 SYMLINK libspdk_thread.so 00:02:32.069 CC lib/fsdev/fsdev.o 00:02:32.069 CC lib/fsdev/fsdev_rpc.o 00:02:32.069 CC lib/fsdev/fsdev_io.o 00:02:32.069 CC lib/init/json_config.o 00:02:32.069 CC lib/init/subsystem.o 00:02:32.069 CC lib/blob/blobstore.o 00:02:32.069 CC lib/init/subsystem_rpc.o 00:02:32.069 CC lib/blob/request.o 00:02:32.069 CC lib/blob/zeroes.o 
00:02:32.069 CC lib/blob/blob_bs_dev.o 00:02:32.069 CC lib/init/rpc.o 00:02:32.069 CC lib/accel/accel_rpc.o 00:02:32.069 CC lib/accel/accel.o 00:02:32.069 CC lib/virtio/virtio.o 00:02:32.069 CC lib/accel/accel_sw.o 00:02:32.069 CC lib/virtio/virtio_vhost_user.o 00:02:32.069 CC lib/virtio/virtio_vfio_user.o 00:02:32.069 CC lib/virtio/virtio_pci.o 00:02:32.327 LIB libspdk_init.a 00:02:32.327 SO libspdk_init.so.6.0 00:02:32.327 LIB libspdk_virtio.a 00:02:32.327 SYMLINK libspdk_init.so 00:02:32.586 SO libspdk_virtio.so.7.0 00:02:32.586 SYMLINK libspdk_virtio.so 00:02:32.586 LIB libspdk_fsdev.a 00:02:32.843 SO libspdk_fsdev.so.1.0 00:02:32.843 CC lib/event/log_rpc.o 00:02:32.843 CC lib/event/app.o 00:02:32.843 CC lib/event/scheduler_static.o 00:02:32.843 CC lib/event/reactor.o 00:02:32.843 CC lib/event/app_rpc.o 00:02:32.843 SYMLINK libspdk_fsdev.so 00:02:33.101 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:33.101 LIB libspdk_nvme.a 00:02:33.101 LIB libspdk_accel.a 00:02:33.101 SO libspdk_accel.so.16.0 00:02:33.101 LIB libspdk_event.a 00:02:33.359 SYMLINK libspdk_accel.so 00:02:33.359 SO libspdk_nvme.so.14.0 00:02:33.359 SO libspdk_event.so.15.0 00:02:33.359 SYMLINK libspdk_event.so 00:02:33.616 SYMLINK libspdk_nvme.so 00:02:33.616 CC lib/bdev/bdev.o 00:02:33.616 CC lib/bdev/bdev_rpc.o 00:02:33.616 CC lib/bdev/bdev_zone.o 00:02:33.616 CC lib/bdev/part.o 00:02:33.616 CC lib/bdev/scsi_nvme.o 00:02:33.616 LIB libspdk_fuse_dispatcher.a 00:02:33.873 SO libspdk_fuse_dispatcher.so.1.0 00:02:33.873 SYMLINK libspdk_fuse_dispatcher.so 00:02:35.245 LIB libspdk_blob.a 00:02:35.245 SO libspdk_blob.so.11.0 00:02:35.245 SYMLINK libspdk_blob.so 00:02:35.812 CC lib/blobfs/blobfs.o 00:02:35.812 CC lib/blobfs/tree.o 00:02:35.812 CC lib/lvol/lvol.o 00:02:36.070 LIB libspdk_bdev.a 00:02:36.070 SO libspdk_bdev.so.17.0 00:02:36.326 SYMLINK libspdk_bdev.so 00:02:36.326 LIB libspdk_blobfs.a 00:02:36.326 SO libspdk_blobfs.so.10.0 00:02:36.590 LIB libspdk_lvol.a 00:02:36.590 SYMLINK libspdk_blobfs.so 00:02:36.590 CC lib/ftl/ftl_core.o 00:02:36.590 CC lib/scsi/dev.o 00:02:36.590 CC lib/nbd/nbd_rpc.o 00:02:36.590 CC lib/nbd/nbd.o 00:02:36.590 CC lib/scsi/lun.o 00:02:36.590 SO libspdk_lvol.so.10.0 00:02:36.590 CC lib/ftl/ftl_init.o 00:02:36.590 CC lib/ftl/ftl_io.o 00:02:36.590 CC lib/scsi/port.o 00:02:36.590 CC lib/scsi/scsi_bdev.o 00:02:36.590 CC lib/ftl/ftl_layout.o 00:02:36.590 CC lib/ftl/ftl_sb.o 00:02:36.590 CC lib/scsi/scsi.o 00:02:36.590 CC lib/ftl/ftl_debug.o 00:02:36.590 CC lib/scsi/scsi_pr.o 00:02:36.590 CC lib/ftl/ftl_l2p.o 00:02:36.590 CC lib/scsi/scsi_rpc.o 00:02:36.590 CC lib/ftl/ftl_l2p_flat.o 00:02:36.590 CC lib/scsi/task.o 00:02:36.590 CC lib/ftl/ftl_band_ops.o 00:02:36.590 CC lib/ftl/ftl_nv_cache.o 00:02:36.590 CC lib/ublk/ublk.o 00:02:36.590 CC lib/ftl/ftl_band.o 00:02:36.590 CC lib/ublk/ublk_rpc.o 00:02:36.590 CC lib/nvmf/ctrlr.o 00:02:36.590 CC lib/ftl/ftl_rq.o 00:02:36.590 CC lib/ftl/ftl_writer.o 00:02:36.590 CC lib/ftl/ftl_l2p_cache.o 00:02:36.590 CC lib/ftl/ftl_p2l.o 00:02:36.590 CC lib/nvmf/subsystem.o 00:02:36.590 CC lib/nvmf/ctrlr_bdev.o 00:02:36.590 CC lib/nvmf/ctrlr_discovery.o 00:02:36.590 CC lib/ftl/ftl_reloc.o 00:02:36.590 CC lib/ftl/ftl_p2l_log.o 00:02:36.590 CC lib/nvmf/nvmf.o 00:02:36.590 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:36.590 CC lib/ftl/mngt/ftl_mngt.o 00:02:36.590 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:36.590 CC lib/nvmf/nvmf_rpc.o 00:02:36.590 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:36.590 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:36.590 CC lib/nvmf/transport.o 00:02:36.590 CC 
lib/ftl/mngt/ftl_mngt_band.o 00:02:36.590 CC lib/nvmf/tcp.o 00:02:36.590 CC lib/nvmf/stubs.o 00:02:36.590 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:36.590 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:36.590 CC lib/nvmf/mdns_server.o 00:02:36.590 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:36.590 CC lib/nvmf/rdma.o 00:02:36.590 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:36.590 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:36.590 CC lib/nvmf/auth.o 00:02:36.590 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:36.590 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:36.590 CC lib/ftl/utils/ftl_conf.o 00:02:36.590 CC lib/ftl/utils/ftl_mempool.o 00:02:36.590 CC lib/ftl/utils/ftl_md.o 00:02:36.590 CC lib/ftl/utils/ftl_bitmap.o 00:02:36.590 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:36.590 CC lib/ftl/utils/ftl_property.o 00:02:36.590 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:36.590 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:36.590 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:36.590 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:36.590 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:36.590 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:36.590 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:36.590 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:36.590 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:36.590 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:36.590 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:36.590 SYMLINK libspdk_lvol.so 00:02:36.590 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:36.848 CC lib/ftl/base/ftl_base_dev.o 00:02:36.848 CC lib/ftl/base/ftl_base_bdev.o 00:02:36.848 CC lib/ftl/ftl_trace.o 00:02:37.415 LIB libspdk_nbd.a 00:02:37.415 SO libspdk_nbd.so.7.0 00:02:37.415 SYMLINK libspdk_nbd.so 00:02:37.415 LIB libspdk_scsi.a 00:02:37.415 SO libspdk_scsi.so.9.0 00:02:37.415 LIB libspdk_ublk.a 00:02:37.673 SO libspdk_ublk.so.3.0 00:02:37.673 SYMLINK libspdk_scsi.so 00:02:37.673 SYMLINK libspdk_ublk.so 00:02:37.932 LIB libspdk_ftl.a 00:02:37.932 CC lib/iscsi/init_grp.o 00:02:37.932 CC lib/iscsi/conn.o 00:02:37.932 CC lib/iscsi/param.o 00:02:37.932 CC lib/iscsi/iscsi.o 00:02:37.932 CC lib/iscsi/tgt_node.o 00:02:37.932 CC lib/iscsi/portal_grp.o 00:02:37.932 CC lib/iscsi/iscsi_subsystem.o 00:02:37.932 CC lib/iscsi/iscsi_rpc.o 00:02:37.932 CC lib/iscsi/task.o 00:02:37.932 CC lib/vhost/vhost.o 00:02:37.932 CC lib/vhost/vhost_rpc.o 00:02:37.932 CC lib/vhost/vhost_blk.o 00:02:37.932 CC lib/vhost/vhost_scsi.o 00:02:37.932 CC lib/vhost/rte_vhost_user.o 00:02:37.932 SO libspdk_ftl.so.9.0 00:02:38.190 SYMLINK libspdk_ftl.so 00:02:38.758 LIB libspdk_vhost.a 00:02:38.758 SO libspdk_vhost.so.8.0 00:02:39.016 SYMLINK libspdk_vhost.so 00:02:39.016 LIB libspdk_nvmf.a 00:02:39.275 SO libspdk_nvmf.so.19.0 00:02:39.275 LIB libspdk_iscsi.a 00:02:39.275 SO libspdk_iscsi.so.8.0 00:02:39.275 SYMLINK libspdk_nvmf.so 00:02:39.534 SYMLINK libspdk_iscsi.so 00:02:39.792 CC module/env_dpdk/env_dpdk_rpc.o 00:02:40.051 CC module/blob/bdev/blob_bdev.o 00:02:40.051 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:40.051 CC module/scheduler/gscheduler/gscheduler.o 00:02:40.051 CC module/accel/dsa/accel_dsa_rpc.o 00:02:40.051 CC module/sock/posix/posix.o 00:02:40.051 CC module/accel/dsa/accel_dsa.o 00:02:40.051 CC module/accel/iaa/accel_iaa.o 00:02:40.051 CC module/accel/iaa/accel_iaa_rpc.o 00:02:40.051 LIB libspdk_env_dpdk_rpc.a 00:02:40.051 CC module/accel/ioat/accel_ioat_rpc.o 00:02:40.051 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:40.051 CC module/keyring/file/keyring.o 00:02:40.051 CC module/accel/ioat/accel_ioat.o 00:02:40.051 CC module/keyring/file/keyring_rpc.o 00:02:40.051 CC 
module/keyring/linux/keyring.o 00:02:40.051 CC module/keyring/linux/keyring_rpc.o 00:02:40.051 CC module/fsdev/aio/fsdev_aio.o 00:02:40.051 CC module/fsdev/aio/linux_aio_mgr.o 00:02:40.051 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:40.051 CC module/accel/error/accel_error.o 00:02:40.051 SO libspdk_env_dpdk_rpc.so.6.0 00:02:40.051 CC module/accel/error/accel_error_rpc.o 00:02:40.051 SYMLINK libspdk_env_dpdk_rpc.so 00:02:40.310 LIB libspdk_scheduler_gscheduler.a 00:02:40.310 LIB libspdk_keyring_file.a 00:02:40.310 LIB libspdk_scheduler_dpdk_governor.a 00:02:40.310 LIB libspdk_keyring_linux.a 00:02:40.310 SO libspdk_scheduler_gscheduler.so.4.0 00:02:40.310 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:40.310 SO libspdk_keyring_file.so.2.0 00:02:40.310 SO libspdk_keyring_linux.so.1.0 00:02:40.310 LIB libspdk_scheduler_dynamic.a 00:02:40.310 LIB libspdk_accel_iaa.a 00:02:40.310 LIB libspdk_accel_ioat.a 00:02:40.310 SYMLINK libspdk_scheduler_gscheduler.so 00:02:40.310 SO libspdk_scheduler_dynamic.so.4.0 00:02:40.310 LIB libspdk_accel_error.a 00:02:40.310 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:40.310 SO libspdk_accel_iaa.so.3.0 00:02:40.310 LIB libspdk_blob_bdev.a 00:02:40.310 SO libspdk_accel_ioat.so.6.0 00:02:40.310 SYMLINK libspdk_keyring_linux.so 00:02:40.310 SYMLINK libspdk_keyring_file.so 00:02:40.310 SO libspdk_accel_error.so.2.0 00:02:40.310 SYMLINK libspdk_scheduler_dynamic.so 00:02:40.310 SO libspdk_blob_bdev.so.11.0 00:02:40.310 LIB libspdk_accel_dsa.a 00:02:40.310 SYMLINK libspdk_accel_iaa.so 00:02:40.310 SO libspdk_accel_dsa.so.5.0 00:02:40.310 SYMLINK libspdk_accel_error.so 00:02:40.310 SYMLINK libspdk_accel_ioat.so 00:02:40.310 SYMLINK libspdk_blob_bdev.so 00:02:40.569 SYMLINK libspdk_accel_dsa.so 00:02:40.827 LIB libspdk_fsdev_aio.a 00:02:40.827 LIB libspdk_sock_posix.a 00:02:40.827 SO libspdk_fsdev_aio.so.1.0 00:02:40.827 SO libspdk_sock_posix.so.6.0 00:02:40.827 CC module/blobfs/bdev/blobfs_bdev.o 00:02:40.827 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:40.827 CC module/bdev/malloc/bdev_malloc.o 00:02:40.827 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:40.827 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:40.827 CC module/bdev/nvme/bdev_nvme.o 00:02:40.827 CC module/bdev/ftl/bdev_ftl.o 00:02:40.827 CC module/bdev/gpt/gpt.o 00:02:40.827 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:40.827 CC module/bdev/gpt/vbdev_gpt.o 00:02:40.827 CC module/bdev/nvme/nvme_rpc.o 00:02:40.827 CC module/bdev/nvme/bdev_mdns_client.o 00:02:40.828 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:40.828 CC module/bdev/passthru/vbdev_passthru.o 00:02:40.828 CC module/bdev/nvme/vbdev_opal.o 00:02:40.828 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:40.828 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:40.828 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:40.828 CC module/bdev/lvol/vbdev_lvol.o 00:02:40.828 CC module/bdev/raid/bdev_raid.o 00:02:40.828 CC module/bdev/raid/bdev_raid_rpc.o 00:02:40.828 CC module/bdev/raid/bdev_raid_sb.o 00:02:40.828 CC module/bdev/raid/raid0.o 00:02:40.828 CC module/bdev/raid/raid1.o 00:02:40.828 CC module/bdev/raid/concat.o 00:02:40.828 CC module/bdev/aio/bdev_aio.o 00:02:40.828 CC module/bdev/aio/bdev_aio_rpc.o 00:02:40.828 CC module/bdev/split/vbdev_split.o 00:02:40.828 CC module/bdev/null/bdev_null.o 00:02:40.828 CC module/bdev/split/vbdev_split_rpc.o 00:02:40.828 CC module/bdev/null/bdev_null_rpc.o 00:02:40.828 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:40.828 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:40.828 CC module/bdev/delay/vbdev_delay.o 00:02:40.828 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:40.828 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:40.828 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:40.828 SYMLINK libspdk_fsdev_aio.so 00:02:40.828 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:40.828 CC module/bdev/iscsi/bdev_iscsi.o 00:02:40.828 CC module/bdev/error/vbdev_error.o 00:02:40.828 CC module/bdev/error/vbdev_error_rpc.o 00:02:40.828 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:40.828 SYMLINK libspdk_sock_posix.so 00:02:41.086 LIB libspdk_blobfs_bdev.a 00:02:41.086 LIB libspdk_bdev_split.a 00:02:41.086 SO libspdk_blobfs_bdev.so.6.0 00:02:41.345 LIB libspdk_bdev_ftl.a 00:02:41.345 SO libspdk_bdev_split.so.6.0 00:02:41.345 LIB libspdk_bdev_gpt.a 00:02:41.345 LIB libspdk_bdev_passthru.a 00:02:41.345 SO libspdk_bdev_ftl.so.6.0 00:02:41.345 SO libspdk_bdev_gpt.so.6.0 00:02:41.345 SYMLINK libspdk_blobfs_bdev.so 00:02:41.345 SO libspdk_bdev_passthru.so.6.0 00:02:41.345 SYMLINK libspdk_bdev_split.so 00:02:41.345 LIB libspdk_bdev_malloc.a 00:02:41.345 LIB libspdk_bdev_zone_block.a 00:02:41.345 LIB libspdk_bdev_aio.a 00:02:41.345 SYMLINK libspdk_bdev_ftl.so 00:02:41.345 LIB libspdk_bdev_error.a 00:02:41.345 SYMLINK libspdk_bdev_gpt.so 00:02:41.345 LIB libspdk_bdev_null.a 00:02:41.345 LIB libspdk_bdev_delay.a 00:02:41.345 SO libspdk_bdev_malloc.so.6.0 00:02:41.345 SO libspdk_bdev_zone_block.so.6.0 00:02:41.345 SYMLINK libspdk_bdev_passthru.so 00:02:41.345 SO libspdk_bdev_error.so.6.0 00:02:41.345 SO libspdk_bdev_aio.so.6.0 00:02:41.345 SO libspdk_bdev_null.so.6.0 00:02:41.345 SO libspdk_bdev_delay.so.6.0 00:02:41.345 SYMLINK libspdk_bdev_malloc.so 00:02:41.345 SYMLINK libspdk_bdev_zone_block.so 00:02:41.345 SYMLINK libspdk_bdev_error.so 00:02:41.345 LIB libspdk_bdev_iscsi.a 00:02:41.345 SYMLINK libspdk_bdev_aio.so 00:02:41.345 SYMLINK libspdk_bdev_null.so 00:02:41.604 SYMLINK libspdk_bdev_delay.so 00:02:41.604 SO libspdk_bdev_iscsi.so.6.0 00:02:41.604 LIB libspdk_bdev_lvol.a 00:02:41.604 LIB libspdk_bdev_virtio.a 00:02:41.604 SO libspdk_bdev_lvol.so.6.0 00:02:41.604 SYMLINK libspdk_bdev_iscsi.so 00:02:41.604 SO libspdk_bdev_virtio.so.6.0 00:02:41.604 SYMLINK libspdk_bdev_lvol.so 00:02:41.604 SYMLINK libspdk_bdev_virtio.so 00:02:41.863 LIB libspdk_bdev_raid.a 00:02:42.146 SO libspdk_bdev_raid.so.6.0 00:02:42.147 SYMLINK libspdk_bdev_raid.so 00:02:43.203 LIB libspdk_bdev_nvme.a 00:02:43.203 SO libspdk_bdev_nvme.so.7.0 00:02:43.462 SYMLINK libspdk_bdev_nvme.so 00:02:44.030 CC module/event/subsystems/fsdev/fsdev.o 00:02:44.030 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:44.030 CC module/event/subsystems/vmd/vmd.o 00:02:44.030 CC module/event/subsystems/iobuf/iobuf.o 00:02:44.030 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:44.030 CC module/event/subsystems/keyring/keyring.o 00:02:44.030 CC module/event/subsystems/scheduler/scheduler.o 00:02:44.030 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:44.030 CC module/event/subsystems/sock/sock.o 00:02:44.030 LIB libspdk_event_keyring.a 00:02:44.030 LIB libspdk_event_iobuf.a 00:02:44.030 LIB libspdk_event_vhost_blk.a 00:02:44.030 LIB libspdk_event_fsdev.a 00:02:44.030 LIB libspdk_event_vmd.a 00:02:44.030 LIB libspdk_event_sock.a 00:02:44.030 SO libspdk_event_keyring.so.1.0 00:02:44.030 SO libspdk_event_iobuf.so.3.0 00:02:44.030 LIB libspdk_event_scheduler.a 00:02:44.030 SO libspdk_event_vhost_blk.so.3.0 00:02:44.030 SO libspdk_event_fsdev.so.1.0 00:02:44.030 SO libspdk_event_vmd.so.6.0 00:02:44.030 SO libspdk_event_scheduler.so.4.0 00:02:44.030 SO libspdk_event_sock.so.5.0 
00:02:44.290 SYMLINK libspdk_event_keyring.so 00:02:44.290 SYMLINK libspdk_event_fsdev.so 00:02:44.290 SYMLINK libspdk_event_vhost_blk.so 00:02:44.290 SYMLINK libspdk_event_iobuf.so 00:02:44.290 SYMLINK libspdk_event_vmd.so 00:02:44.290 SYMLINK libspdk_event_scheduler.so 00:02:44.290 SYMLINK libspdk_event_sock.so 00:02:44.549 CC module/event/subsystems/accel/accel.o 00:02:44.549 LIB libspdk_event_accel.a 00:02:44.809 SO libspdk_event_accel.so.6.0 00:02:44.809 SYMLINK libspdk_event_accel.so 00:02:45.069 CC module/event/subsystems/bdev/bdev.o 00:02:45.329 LIB libspdk_event_bdev.a 00:02:45.329 SO libspdk_event_bdev.so.6.0 00:02:45.329 SYMLINK libspdk_event_bdev.so 00:02:45.587 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:45.587 CC module/event/subsystems/scsi/scsi.o 00:02:45.587 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:45.587 CC module/event/subsystems/nbd/nbd.o 00:02:45.846 CC module/event/subsystems/ublk/ublk.o 00:02:45.846 LIB libspdk_event_scsi.a 00:02:45.846 LIB libspdk_event_nbd.a 00:02:45.846 LIB libspdk_event_ublk.a 00:02:45.846 SO libspdk_event_scsi.so.6.0 00:02:45.846 LIB libspdk_event_nvmf.a 00:02:45.846 SO libspdk_event_nbd.so.6.0 00:02:45.846 SO libspdk_event_ublk.so.3.0 00:02:45.846 SO libspdk_event_nvmf.so.6.0 00:02:45.846 SYMLINK libspdk_event_scsi.so 00:02:45.846 SYMLINK libspdk_event_nbd.so 00:02:45.846 SYMLINK libspdk_event_ublk.so 00:02:46.106 SYMLINK libspdk_event_nvmf.so 00:02:46.365 CC module/event/subsystems/iscsi/iscsi.o 00:02:46.365 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:46.365 LIB libspdk_event_iscsi.a 00:02:46.365 LIB libspdk_event_vhost_scsi.a 00:02:46.365 SO libspdk_event_iscsi.so.6.0 00:02:46.365 SO libspdk_event_vhost_scsi.so.3.0 00:02:46.625 SYMLINK libspdk_event_iscsi.so 00:02:46.625 SYMLINK libspdk_event_vhost_scsi.so 00:02:46.625 SO libspdk.so.6.0 00:02:46.625 SYMLINK libspdk.so 00:02:47.198 CC app/spdk_nvme_perf/perf.o 00:02:47.198 CC app/spdk_nvme_discover/discovery_aer.o 00:02:47.198 CXX app/trace/trace.o 00:02:47.198 CC app/trace_record/trace_record.o 00:02:47.198 CC app/spdk_top/spdk_top.o 00:02:47.198 CC app/spdk_lspci/spdk_lspci.o 00:02:47.198 CC test/rpc_client/rpc_client_test.o 00:02:47.198 CC app/spdk_nvme_identify/identify.o 00:02:47.198 TEST_HEADER include/spdk/accel.h 00:02:47.198 TEST_HEADER include/spdk/accel_module.h 00:02:47.198 TEST_HEADER include/spdk/barrier.h 00:02:47.198 TEST_HEADER include/spdk/assert.h 00:02:47.198 TEST_HEADER include/spdk/base64.h 00:02:47.198 TEST_HEADER include/spdk/bdev_module.h 00:02:47.198 TEST_HEADER include/spdk/bdev.h 00:02:47.198 TEST_HEADER include/spdk/bdev_zone.h 00:02:47.198 TEST_HEADER include/spdk/bit_array.h 00:02:47.198 TEST_HEADER include/spdk/bit_pool.h 00:02:47.198 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:47.198 TEST_HEADER include/spdk/blobfs.h 00:02:47.198 TEST_HEADER include/spdk/blob_bdev.h 00:02:47.198 TEST_HEADER include/spdk/blob.h 00:02:47.198 TEST_HEADER include/spdk/conf.h 00:02:47.198 TEST_HEADER include/spdk/config.h 00:02:47.198 TEST_HEADER include/spdk/cpuset.h 00:02:47.198 TEST_HEADER include/spdk/crc16.h 00:02:47.198 TEST_HEADER include/spdk/crc32.h 00:02:47.198 TEST_HEADER include/spdk/dif.h 00:02:47.198 TEST_HEADER include/spdk/crc64.h 00:02:47.198 TEST_HEADER include/spdk/dma.h 00:02:47.198 TEST_HEADER include/spdk/endian.h 00:02:47.198 TEST_HEADER include/spdk/env_dpdk.h 00:02:47.198 TEST_HEADER include/spdk/env.h 00:02:47.198 TEST_HEADER include/spdk/event.h 00:02:47.198 TEST_HEADER include/spdk/fd_group.h 00:02:47.198 TEST_HEADER 
include/spdk/fd.h 00:02:47.198 TEST_HEADER include/spdk/file.h 00:02:47.198 TEST_HEADER include/spdk/fsdev.h 00:02:47.198 TEST_HEADER include/spdk/fsdev_module.h 00:02:47.198 TEST_HEADER include/spdk/ftl.h 00:02:47.198 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:47.198 TEST_HEADER include/spdk/hexlify.h 00:02:47.198 TEST_HEADER include/spdk/gpt_spec.h 00:02:47.198 TEST_HEADER include/spdk/histogram_data.h 00:02:47.198 TEST_HEADER include/spdk/idxd_spec.h 00:02:47.198 TEST_HEADER include/spdk/idxd.h 00:02:47.198 CC app/iscsi_tgt/iscsi_tgt.o 00:02:47.198 TEST_HEADER include/spdk/ioat.h 00:02:47.198 TEST_HEADER include/spdk/ioat_spec.h 00:02:47.198 TEST_HEADER include/spdk/init.h 00:02:47.198 TEST_HEADER include/spdk/iscsi_spec.h 00:02:47.198 TEST_HEADER include/spdk/json.h 00:02:47.198 TEST_HEADER include/spdk/jsonrpc.h 00:02:47.198 TEST_HEADER include/spdk/keyring_module.h 00:02:47.198 TEST_HEADER include/spdk/keyring.h 00:02:47.198 TEST_HEADER include/spdk/likely.h 00:02:47.198 TEST_HEADER include/spdk/log.h 00:02:47.198 TEST_HEADER include/spdk/md5.h 00:02:47.198 TEST_HEADER include/spdk/lvol.h 00:02:47.198 CC app/spdk_dd/spdk_dd.o 00:02:47.198 TEST_HEADER include/spdk/memory.h 00:02:47.198 TEST_HEADER include/spdk/mmio.h 00:02:47.198 TEST_HEADER include/spdk/nbd.h 00:02:47.198 CC app/nvmf_tgt/nvmf_main.o 00:02:47.199 TEST_HEADER include/spdk/net.h 00:02:47.199 TEST_HEADER include/spdk/notify.h 00:02:47.199 TEST_HEADER include/spdk/nvme.h 00:02:47.199 TEST_HEADER include/spdk/nvme_intel.h 00:02:47.199 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:47.199 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:47.199 TEST_HEADER include/spdk/nvme_spec.h 00:02:47.199 TEST_HEADER include/spdk/nvme_zns.h 00:02:47.199 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:47.199 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:47.199 TEST_HEADER include/spdk/nvmf.h 00:02:47.199 TEST_HEADER include/spdk/nvmf_spec.h 00:02:47.199 TEST_HEADER include/spdk/nvmf_transport.h 00:02:47.199 TEST_HEADER include/spdk/opal.h 00:02:47.199 TEST_HEADER include/spdk/opal_spec.h 00:02:47.199 TEST_HEADER include/spdk/pci_ids.h 00:02:47.199 TEST_HEADER include/spdk/pipe.h 00:02:47.199 TEST_HEADER include/spdk/queue.h 00:02:47.199 TEST_HEADER include/spdk/rpc.h 00:02:47.199 TEST_HEADER include/spdk/reduce.h 00:02:47.199 TEST_HEADER include/spdk/scsi.h 00:02:47.199 TEST_HEADER include/spdk/scheduler.h 00:02:47.199 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:47.199 TEST_HEADER include/spdk/scsi_spec.h 00:02:47.199 TEST_HEADER include/spdk/sock.h 00:02:47.199 TEST_HEADER include/spdk/stdinc.h 00:02:47.199 TEST_HEADER include/spdk/string.h 00:02:47.199 TEST_HEADER include/spdk/thread.h 00:02:47.199 TEST_HEADER include/spdk/trace.h 00:02:47.199 TEST_HEADER include/spdk/trace_parser.h 00:02:47.199 TEST_HEADER include/spdk/tree.h 00:02:47.199 TEST_HEADER include/spdk/ublk.h 00:02:47.199 TEST_HEADER include/spdk/util.h 00:02:47.199 TEST_HEADER include/spdk/uuid.h 00:02:47.199 TEST_HEADER include/spdk/version.h 00:02:47.199 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:47.199 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:47.199 TEST_HEADER include/spdk/vhost.h 00:02:47.199 TEST_HEADER include/spdk/vmd.h 00:02:47.199 TEST_HEADER include/spdk/xor.h 00:02:47.199 TEST_HEADER include/spdk/zipf.h 00:02:47.199 CXX test/cpp_headers/accel.o 00:02:47.199 CXX test/cpp_headers/accel_module.o 00:02:47.199 CXX test/cpp_headers/assert.o 00:02:47.199 CXX test/cpp_headers/barrier.o 00:02:47.199 CXX test/cpp_headers/base64.o 00:02:47.199 CC 
app/spdk_tgt/spdk_tgt.o 00:02:47.199 CXX test/cpp_headers/bdev.o 00:02:47.199 CXX test/cpp_headers/bdev_module.o 00:02:47.199 CXX test/cpp_headers/bit_array.o 00:02:47.199 CXX test/cpp_headers/bdev_zone.o 00:02:47.199 CXX test/cpp_headers/bit_pool.o 00:02:47.199 CXX test/cpp_headers/blob_bdev.o 00:02:47.199 CXX test/cpp_headers/blobfs.o 00:02:47.199 CXX test/cpp_headers/blobfs_bdev.o 00:02:47.199 CXX test/cpp_headers/conf.o 00:02:47.199 CXX test/cpp_headers/blob.o 00:02:47.199 CXX test/cpp_headers/config.o 00:02:47.199 CXX test/cpp_headers/cpuset.o 00:02:47.199 CXX test/cpp_headers/crc16.o 00:02:47.199 CXX test/cpp_headers/crc64.o 00:02:47.199 CXX test/cpp_headers/crc32.o 00:02:47.199 CXX test/cpp_headers/dif.o 00:02:47.199 CXX test/cpp_headers/dma.o 00:02:47.199 CXX test/cpp_headers/endian.o 00:02:47.199 CXX test/cpp_headers/env.o 00:02:47.199 CXX test/cpp_headers/env_dpdk.o 00:02:47.199 CXX test/cpp_headers/event.o 00:02:47.199 CXX test/cpp_headers/fd_group.o 00:02:47.199 CXX test/cpp_headers/fd.o 00:02:47.199 CXX test/cpp_headers/file.o 00:02:47.199 CXX test/cpp_headers/fsdev.o 00:02:47.199 CXX test/cpp_headers/fsdev_module.o 00:02:47.199 CXX test/cpp_headers/ftl.o 00:02:47.199 CXX test/cpp_headers/fuse_dispatcher.o 00:02:47.199 CXX test/cpp_headers/gpt_spec.o 00:02:47.199 CXX test/cpp_headers/hexlify.o 00:02:47.199 CXX test/cpp_headers/histogram_data.o 00:02:47.199 CXX test/cpp_headers/idxd.o 00:02:47.199 CXX test/cpp_headers/idxd_spec.o 00:02:47.199 CXX test/cpp_headers/init.o 00:02:47.199 CXX test/cpp_headers/ioat.o 00:02:47.199 CXX test/cpp_headers/ioat_spec.o 00:02:47.199 CXX test/cpp_headers/iscsi_spec.o 00:02:47.199 CC examples/ioat/verify/verify.o 00:02:47.199 CXX test/cpp_headers/json.o 00:02:47.199 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:47.199 CC app/fio/nvme/fio_plugin.o 00:02:47.199 CC examples/ioat/perf/perf.o 00:02:47.199 CC test/env/memory/memory_ut.o 00:02:47.199 CC test/env/pci/pci_ut.o 00:02:47.199 CC test/thread/poller_perf/poller_perf.o 00:02:47.199 CC test/env/vtophys/vtophys.o 00:02:47.199 CC examples/util/zipf/zipf.o 00:02:47.199 CC test/app/jsoncat/jsoncat.o 00:02:47.199 CC test/app/histogram_perf/histogram_perf.o 00:02:47.199 CC test/app/stub/stub.o 00:02:47.199 CC app/fio/bdev/fio_plugin.o 00:02:47.199 CC test/dma/test_dma/test_dma.o 00:02:47.459 CC test/app/bdev_svc/bdev_svc.o 00:02:47.459 LINK spdk_lspci 00:02:47.459 LINK rpc_client_test 00:02:47.459 LINK spdk_nvme_discover 00:02:47.459 CC test/env/mem_callbacks/mem_callbacks.o 00:02:47.459 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:47.459 LINK interrupt_tgt 00:02:47.459 LINK iscsi_tgt 00:02:47.459 LINK nvmf_tgt 00:02:47.720 LINK poller_perf 00:02:47.720 LINK vtophys 00:02:47.720 LINK spdk_trace_record 00:02:47.720 LINK zipf 00:02:47.720 LINK env_dpdk_post_init 00:02:47.720 LINK spdk_tgt 00:02:47.720 LINK jsoncat 00:02:47.720 LINK histogram_perf 00:02:47.720 CXX test/cpp_headers/jsonrpc.o 00:02:47.720 CXX test/cpp_headers/keyring.o 00:02:47.720 CXX test/cpp_headers/keyring_module.o 00:02:47.720 CXX test/cpp_headers/likely.o 00:02:47.720 CXX test/cpp_headers/log.o 00:02:47.720 CXX test/cpp_headers/lvol.o 00:02:47.720 CXX test/cpp_headers/md5.o 00:02:47.720 CXX test/cpp_headers/memory.o 00:02:47.720 CXX test/cpp_headers/mmio.o 00:02:47.720 CXX test/cpp_headers/nbd.o 00:02:47.720 CXX test/cpp_headers/net.o 00:02:47.720 CXX test/cpp_headers/notify.o 00:02:47.720 CXX test/cpp_headers/nvme.o 00:02:47.720 CXX test/cpp_headers/nvme_intel.o 00:02:47.720 CXX test/cpp_headers/nvme_ocssd.o 
00:02:47.720 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:47.720 LINK stub 00:02:47.720 CXX test/cpp_headers/nvme_spec.o 00:02:47.720 CXX test/cpp_headers/nvme_zns.o 00:02:47.720 CXX test/cpp_headers/nvmf_cmd.o 00:02:47.720 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:47.720 CXX test/cpp_headers/nvmf.o 00:02:47.721 CXX test/cpp_headers/nvmf_spec.o 00:02:47.721 CXX test/cpp_headers/nvmf_transport.o 00:02:47.721 CXX test/cpp_headers/opal.o 00:02:47.721 CXX test/cpp_headers/opal_spec.o 00:02:47.721 CXX test/cpp_headers/pci_ids.o 00:02:47.721 CXX test/cpp_headers/pipe.o 00:02:47.721 CXX test/cpp_headers/queue.o 00:02:47.721 CXX test/cpp_headers/reduce.o 00:02:47.721 CXX test/cpp_headers/rpc.o 00:02:47.721 CXX test/cpp_headers/scheduler.o 00:02:47.721 CXX test/cpp_headers/scsi.o 00:02:47.721 CXX test/cpp_headers/scsi_spec.o 00:02:47.721 CXX test/cpp_headers/sock.o 00:02:47.721 CXX test/cpp_headers/stdinc.o 00:02:47.721 CXX test/cpp_headers/string.o 00:02:47.721 CXX test/cpp_headers/thread.o 00:02:47.721 CXX test/cpp_headers/trace.o 00:02:47.721 CXX test/cpp_headers/trace_parser.o 00:02:47.721 CXX test/cpp_headers/tree.o 00:02:47.721 CXX test/cpp_headers/ublk.o 00:02:47.721 LINK verify 00:02:47.721 CXX test/cpp_headers/util.o 00:02:47.721 CXX test/cpp_headers/uuid.o 00:02:47.721 LINK bdev_svc 00:02:47.721 CXX test/cpp_headers/version.o 00:02:47.721 LINK ioat_perf 00:02:47.721 CXX test/cpp_headers/vfio_user_pci.o 00:02:47.981 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:47.981 CXX test/cpp_headers/vfio_user_spec.o 00:02:47.981 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:47.981 LINK spdk_dd 00:02:47.981 CXX test/cpp_headers/vhost.o 00:02:47.981 CXX test/cpp_headers/vmd.o 00:02:47.981 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:47.981 CXX test/cpp_headers/xor.o 00:02:47.981 LINK spdk_trace 00:02:47.981 CXX test/cpp_headers/zipf.o 00:02:48.240 LINK pci_ut 00:02:48.240 CC test/event/reactor/reactor.o 00:02:48.240 CC test/event/reactor_perf/reactor_perf.o 00:02:48.240 CC test/event/event_perf/event_perf.o 00:02:48.240 CC examples/sock/hello_world/hello_sock.o 00:02:48.240 CC examples/idxd/perf/perf.o 00:02:48.240 CC examples/vmd/led/led.o 00:02:48.240 CC examples/vmd/lsvmd/lsvmd.o 00:02:48.240 CC test/event/app_repeat/app_repeat.o 00:02:48.240 CC examples/thread/thread/thread_ex.o 00:02:48.240 CC test/event/scheduler/scheduler.o 00:02:48.240 LINK spdk_bdev 00:02:48.240 LINK test_dma 00:02:48.240 LINK nvme_fuzz 00:02:48.498 LINK spdk_nvme 00:02:48.498 CC app/vhost/vhost.o 00:02:48.498 LINK reactor 00:02:48.498 LINK mem_callbacks 00:02:48.498 LINK lsvmd 00:02:48.498 LINK event_perf 00:02:48.498 LINK reactor_perf 00:02:48.498 LINK led 00:02:48.498 LINK app_repeat 00:02:48.498 LINK vhost_fuzz 00:02:48.498 LINK scheduler 00:02:48.498 LINK hello_sock 00:02:48.498 LINK thread 00:02:48.498 LINK spdk_nvme_perf 00:02:48.757 LINK vhost 00:02:48.757 LINK idxd_perf 00:02:48.757 LINK spdk_top 00:02:48.757 LINK spdk_nvme_identify 00:02:48.757 CC test/nvme/err_injection/err_injection.o 00:02:48.757 CC test/nvme/reset/reset.o 00:02:48.757 CC test/nvme/connect_stress/connect_stress.o 00:02:48.757 CC test/nvme/cuse/cuse.o 00:02:48.757 CC test/nvme/sgl/sgl.o 00:02:48.757 CC test/nvme/simple_copy/simple_copy.o 00:02:48.757 CC test/nvme/fused_ordering/fused_ordering.o 00:02:48.757 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:48.757 CC test/nvme/aer/aer.o 00:02:48.757 CC test/nvme/fdp/fdp.o 00:02:48.757 CC test/nvme/reserve/reserve.o 00:02:48.757 CC test/nvme/overhead/overhead.o 00:02:48.757 CC 
test/nvme/compliance/nvme_compliance.o 00:02:48.757 CC test/nvme/e2edp/nvme_dp.o 00:02:48.757 CC test/nvme/startup/startup.o 00:02:48.757 CC test/nvme/boot_partition/boot_partition.o 00:02:48.757 CC test/accel/dif/dif.o 00:02:48.757 CC test/blobfs/mkfs/mkfs.o 00:02:49.015 CC test/lvol/esnap/esnap.o 00:02:49.015 LINK memory_ut 00:02:49.015 CC examples/nvme/reconnect/reconnect.o 00:02:49.015 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:49.015 CC examples/nvme/arbitration/arbitration.o 00:02:49.015 CC examples/nvme/hotplug/hotplug.o 00:02:49.015 CC examples/nvme/abort/abort.o 00:02:49.015 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:49.015 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:49.015 CC examples/nvme/hello_world/hello_world.o 00:02:49.015 LINK err_injection 00:02:49.015 LINK connect_stress 00:02:49.015 CC examples/accel/perf/accel_perf.o 00:02:49.015 LINK doorbell_aers 00:02:49.015 LINK boot_partition 00:02:49.015 LINK startup 00:02:49.015 LINK mkfs 00:02:49.015 CC examples/blob/cli/blobcli.o 00:02:49.015 LINK fused_ordering 00:02:49.015 CC examples/blob/hello_world/hello_blob.o 00:02:49.015 LINK reserve 00:02:49.015 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:49.015 LINK simple_copy 00:02:49.015 LINK reset 00:02:49.015 LINK sgl 00:02:49.273 LINK nvme_dp 00:02:49.273 LINK overhead 00:02:49.273 LINK cmb_copy 00:02:49.273 LINK pmr_persistence 00:02:49.273 LINK aer 00:02:49.273 LINK nvme_compliance 00:02:49.273 LINK fdp 00:02:49.273 LINK hello_world 00:02:49.273 LINK hotplug 00:02:49.273 LINK arbitration 00:02:49.273 LINK hello_blob 00:02:49.273 LINK reconnect 00:02:49.273 LINK abort 00:02:49.531 LINK hello_fsdev 00:02:49.531 LINK nvme_manage 00:02:49.531 LINK blobcli 00:02:49.531 LINK accel_perf 00:02:49.531 LINK dif 00:02:49.789 LINK iscsi_fuzz 00:02:50.048 LINK cuse 00:02:50.048 CC examples/bdev/hello_world/hello_bdev.o 00:02:50.048 CC examples/bdev/bdevperf/bdevperf.o 00:02:50.306 CC test/bdev/bdevio/bdevio.o 00:02:50.306 LINK hello_bdev 00:02:50.564 LINK bdevio 00:02:50.823 LINK bdevperf 00:02:51.390 CC examples/nvmf/nvmf/nvmf.o 00:02:51.649 LINK nvmf 00:02:54.181 LINK esnap 00:02:54.181 00:02:54.181 real 1m1.370s 00:02:54.181 user 8m47.392s 00:02:54.181 sys 3m25.259s 00:02:54.181 01:44:13 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:54.181 01:44:13 make -- common/autotest_common.sh@10 -- $ set +x 00:02:54.181 ************************************ 00:02:54.181 END TEST make 00:02:54.181 ************************************ 00:02:54.181 01:44:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:54.181 01:44:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:54.181 01:44:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:54.181 01:44:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:54.181 01:44:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:54.181 01:44:13 -- pm/common@44 -- $ pid=3003484 00:02:54.181 01:44:13 -- pm/common@50 -- $ kill -TERM 3003484 00:02:54.181 01:44:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:54.181 01:44:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:54.181 01:44:13 -- pm/common@44 -- $ pid=3003485 00:02:54.181 01:44:13 -- pm/common@50 -- $ kill -TERM 3003485 00:02:54.181 01:44:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:54.181 01:44:13 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:54.181 01:44:13 -- pm/common@44 -- $ pid=3003487 00:02:54.181 01:44:13 -- pm/common@50 -- $ kill -TERM 3003487 00:02:54.181 01:44:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:54.181 01:44:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:54.181 01:44:13 -- pm/common@44 -- $ pid=3003510 00:02:54.181 01:44:13 -- pm/common@50 -- $ sudo -E kill -TERM 3003510 00:02:54.181 01:44:13 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:02:54.181 01:44:13 -- common/autotest_common.sh@1681 -- # lcov --version 00:02:54.181 01:44:13 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:02:54.440 01:44:14 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:02:54.440 01:44:14 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:54.440 01:44:14 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:54.440 01:44:14 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:54.440 01:44:14 -- scripts/common.sh@336 -- # IFS=.-: 00:02:54.440 01:44:14 -- scripts/common.sh@336 -- # read -ra ver1 00:02:54.440 01:44:14 -- scripts/common.sh@337 -- # IFS=.-: 00:02:54.440 01:44:14 -- scripts/common.sh@337 -- # read -ra ver2 00:02:54.440 01:44:14 -- scripts/common.sh@338 -- # local 'op=<' 00:02:54.440 01:44:14 -- scripts/common.sh@340 -- # ver1_l=2 00:02:54.440 01:44:14 -- scripts/common.sh@341 -- # ver2_l=1 00:02:54.440 01:44:14 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:54.440 01:44:14 -- scripts/common.sh@344 -- # case "$op" in 00:02:54.440 01:44:14 -- scripts/common.sh@345 -- # : 1 00:02:54.440 01:44:14 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:54.440 01:44:14 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:54.440 01:44:14 -- scripts/common.sh@365 -- # decimal 1 00:02:54.440 01:44:14 -- scripts/common.sh@353 -- # local d=1 00:02:54.440 01:44:14 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:54.440 01:44:14 -- scripts/common.sh@355 -- # echo 1 00:02:54.440 01:44:14 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:54.440 01:44:14 -- scripts/common.sh@366 -- # decimal 2 00:02:54.440 01:44:14 -- scripts/common.sh@353 -- # local d=2 00:02:54.440 01:44:14 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:54.440 01:44:14 -- scripts/common.sh@355 -- # echo 2 00:02:54.440 01:44:14 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:54.440 01:44:14 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:54.440 01:44:14 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:54.440 01:44:14 -- scripts/common.sh@368 -- # return 0 00:02:54.440 01:44:14 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:54.440 01:44:14 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:02:54.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.440 --rc genhtml_branch_coverage=1 00:02:54.440 --rc genhtml_function_coverage=1 00:02:54.440 --rc genhtml_legend=1 00:02:54.440 --rc geninfo_all_blocks=1 00:02:54.440 --rc geninfo_unexecuted_blocks=1 00:02:54.440 00:02:54.440 ' 00:02:54.440 01:44:14 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:02:54.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.440 --rc genhtml_branch_coverage=1 00:02:54.440 --rc genhtml_function_coverage=1 00:02:54.440 --rc genhtml_legend=1 00:02:54.440 --rc geninfo_all_blocks=1 00:02:54.440 --rc geninfo_unexecuted_blocks=1 00:02:54.440 00:02:54.440 ' 00:02:54.440 01:44:14 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:02:54.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.440 --rc genhtml_branch_coverage=1 00:02:54.440 --rc genhtml_function_coverage=1 00:02:54.440 --rc genhtml_legend=1 00:02:54.440 --rc geninfo_all_blocks=1 00:02:54.440 --rc geninfo_unexecuted_blocks=1 00:02:54.440 00:02:54.440 ' 00:02:54.440 01:44:14 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:02:54.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.440 --rc genhtml_branch_coverage=1 00:02:54.440 --rc genhtml_function_coverage=1 00:02:54.440 --rc genhtml_legend=1 00:02:54.440 --rc geninfo_all_blocks=1 00:02:54.440 --rc geninfo_unexecuted_blocks=1 00:02:54.440 00:02:54.440 ' 00:02:54.440 01:44:14 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:02:54.440 01:44:14 -- nvmf/common.sh@7 -- # uname -s 00:02:54.440 01:44:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:54.440 01:44:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:54.440 01:44:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:54.440 01:44:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:54.441 01:44:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:54.441 01:44:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:54.441 01:44:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:54.441 01:44:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:54.441 01:44:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:54.441 01:44:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:54.441 01:44:14 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:02:54.441 01:44:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:02:54.441 01:44:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:54.441 01:44:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:54.441 01:44:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:54.441 01:44:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:54.441 01:44:14 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:02:54.441 01:44:14 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:54.441 01:44:14 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:54.441 01:44:14 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:54.441 01:44:14 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:54.441 01:44:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:54.441 01:44:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:54.441 01:44:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:54.441 01:44:14 -- paths/export.sh@5 -- # export PATH 00:02:54.441 01:44:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:54.441 01:44:14 -- nvmf/common.sh@51 -- # : 0 00:02:54.441 01:44:14 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:54.441 01:44:14 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:54.441 01:44:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:54.441 01:44:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:54.441 01:44:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:54.441 01:44:14 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:54.441 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:54.441 01:44:14 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:54.441 01:44:14 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:54.441 01:44:14 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:54.441 01:44:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:54.441 01:44:14 -- spdk/autotest.sh@32 -- # uname -s 00:02:54.441 01:44:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:54.441 01:44:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:54.441 01:44:14 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/coredumps 
00:02:54.441 01:44:14 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:54.441 01:44:14 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/coredumps 00:02:54.441 01:44:14 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:54.441 01:44:14 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:54.441 01:44:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:54.441 01:44:14 -- spdk/autotest.sh@48 -- # udevadm_pid=3063459 00:02:54.441 01:44:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:54.441 01:44:14 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:54.441 01:44:14 -- pm/common@17 -- # local monitor 00:02:54.441 01:44:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:54.441 01:44:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:54.441 01:44:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:54.441 01:44:14 -- pm/common@21 -- # date +%s 00:02:54.441 01:44:14 -- pm/common@21 -- # date +%s 00:02:54.441 01:44:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:54.441 01:44:14 -- pm/common@21 -- # date +%s 00:02:54.441 01:44:14 -- pm/common@25 -- # sleep 1 00:02:54.441 01:44:14 -- pm/common@21 -- # date +%s 00:02:54.441 01:44:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728431054 00:02:54.441 01:44:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728431054 00:02:54.441 01:44:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728431054 00:02:54.441 01:44:14 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728431054 00:02:54.441 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728431054_collect-cpu-temp.pm.log 00:02:54.441 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728431054_collect-cpu-load.pm.log 00:02:54.441 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728431054_collect-vmstat.pm.log 00:02:54.441 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728431054_collect-bmc-pm.bmc.pm.log 00:02:55.378 01:44:15 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:55.378 01:44:15 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:55.378 01:44:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:55.378 01:44:15 -- common/autotest_common.sh@10 -- # set +x 00:02:55.378 01:44:15 -- spdk/autotest.sh@59 -- # create_test_list 00:02:55.378 01:44:15 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:55.378 01:44:15 -- common/autotest_common.sh@10 -- # set +x 00:02:55.378 01:44:15 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/autotest.sh 00:02:55.378 01:44:15 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:02:55.378 01:44:15 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:02:55.378 01:44:15 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output 00:02:55.378 01:44:15 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:02:55.378 01:44:15 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:55.378 01:44:15 -- common/autotest_common.sh@1455 -- # uname 00:02:55.378 01:44:15 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:55.378 01:44:15 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:55.378 01:44:15 -- common/autotest_common.sh@1475 -- # uname 00:02:55.637 01:44:15 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:55.637 01:44:15 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:55.638 01:44:15 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:55.638 lcov: LCOV version 1.15 00:02:55.638 01:44:15 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_base.info 00:03:17.578 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:17.578 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:20.892 01:44:40 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:20.892 01:44:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:20.892 01:44:40 -- common/autotest_common.sh@10 -- # set +x 00:03:20.892 01:44:40 -- spdk/autotest.sh@78 -- # rm -f 00:03:20.892 01:44:40 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.183 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:24.183 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:24.183 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:24.183 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:24.183 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:24.183 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:24.183 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:24.183 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:24.183 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:24.183 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:24.183 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:24.183 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:24.183 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:24.183 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:24.183 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:24.183 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:24.183 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:24.183 01:44:43 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:24.183 01:44:43 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:24.183 01:44:43 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:24.183 01:44:43 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:24.183 01:44:43 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:24.183 01:44:43 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:24.183 01:44:43 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:24.183 01:44:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:24.183 01:44:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:24.183 01:44:43 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:24.183 01:44:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:24.183 01:44:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:24.183 01:44:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:24.183 01:44:43 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:24.183 01:44:43 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:24.183 No valid GPT data, bailing 00:03:24.183 01:44:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:24.183 01:44:43 -- scripts/common.sh@394 -- # pt= 00:03:24.183 01:44:43 -- scripts/common.sh@395 -- # return 1 00:03:24.183 01:44:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:24.183 1+0 records in 00:03:24.183 1+0 records out 00:03:24.183 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00678707 s, 154 MB/s 00:03:24.183 01:44:43 -- spdk/autotest.sh@105 -- # sync 00:03:24.183 01:44:43 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:24.183 01:44:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:24.183 01:44:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:30.756 01:44:49 -- spdk/autotest.sh@111 -- # uname -s 00:03:30.756 01:44:49 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:30.756 01:44:49 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:30.756 01:44:49 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh status 00:03:32.663 Hugepages 00:03:32.663 node hugesize free / total 00:03:32.663 node0 1048576kB 0 / 0 00:03:32.663 node0 2048kB 0 / 0 00:03:32.663 node1 1048576kB 0 / 0 00:03:32.663 node1 2048kB 0 / 0 00:03:32.663 00:03:32.663 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:32.663 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:32.663 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:32.663 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:32.663 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:32.663 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:32.663 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:32.663 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:32.663 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:32.663 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:32.663 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:32.922 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:32.922 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:32.922 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:32.922 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:32.922 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:32.922 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:32.922 I/OAT 0000:80:04.7 8086 
2021 1 ioatdma - - 00:03:32.922 01:44:52 -- spdk/autotest.sh@117 -- # uname -s 00:03:32.922 01:44:52 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:32.922 01:44:52 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:32.922 01:44:52 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:03:36.219 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:36.219 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:36.219 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:36.219 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:36.219 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:36.219 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:36.219 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:36.219 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:36.219 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:36.219 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:36.219 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:36.219 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:36.219 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:36.219 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:36.219 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:36.219 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:39.512 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:39.512 01:44:59 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:40.477 01:45:00 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:40.477 01:45:00 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:40.477 01:45:00 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:40.477 01:45:00 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:40.477 01:45:00 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:40.477 01:45:00 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:40.477 01:45:00 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:40.477 01:45:00 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:40.477 01:45:00 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:40.477 01:45:00 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:40.477 01:45:00 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:03:40.477 01:45:00 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:03:43.769 Waiting for block devices as requested 00:03:43.769 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:43.769 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:43.769 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:44.028 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:44.028 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:44.028 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:44.028 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:44.287 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:44.287 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:44.287 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:44.546 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:44.546 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:44.546 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:44.805 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:44.805 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:44.805 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:45.065 0000:80:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:03:45.065 01:45:04 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:45.065 01:45:04 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:45.065 01:45:04 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:45.065 01:45:04 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:03:45.065 01:45:04 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:45.065 01:45:04 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:45.065 01:45:04 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:45.065 01:45:04 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:45.065 01:45:04 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:45.065 01:45:04 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:45.065 01:45:04 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:45.065 01:45:04 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:45.065 01:45:04 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:45.065 01:45:04 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:03:45.065 01:45:04 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:45.065 01:45:04 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:45.065 01:45:04 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:45.065 01:45:04 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:45.065 01:45:04 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:45.065 01:45:04 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:45.065 01:45:04 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:45.065 01:45:04 -- common/autotest_common.sh@1541 -- # continue 00:03:45.065 01:45:04 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:45.065 01:45:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:45.065 01:45:04 -- common/autotest_common.sh@10 -- # set +x 00:03:45.065 01:45:04 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:45.065 01:45:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:45.065 01:45:04 -- common/autotest_common.sh@10 -- # set +x 00:03:45.065 01:45:04 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:03:48.355 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:48.355 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:48.355 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:48.355 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:48.355 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:48.355 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:48.355 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:48.355 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:48.355 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:48.355 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:48.355 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:48.355 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:48.355 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:48.355 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:48.355 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:48.355 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:50.929 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:51.246 01:45:10 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:03:51.246 01:45:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:51.246 01:45:10 -- common/autotest_common.sh@10 -- # set +x 00:03:51.246 01:45:10 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:51.246 01:45:10 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:51.246 01:45:10 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:51.246 01:45:10 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:51.246 01:45:10 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:51.246 01:45:10 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:51.246 01:45:10 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:51.246 01:45:10 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:51.246 01:45:10 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:51.246 01:45:10 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:51.246 01:45:10 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:51.246 01:45:10 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:51.246 01:45:10 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:51.246 01:45:10 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:51.246 01:45:10 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:03:51.246 01:45:10 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:51.246 01:45:10 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:51.246 01:45:10 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:03:51.246 01:45:10 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:51.246 01:45:10 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:03:51.246 01:45:10 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:03:51.246 01:45:10 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:03:51.246 01:45:10 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:03:51.246 01:45:10 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:03:51.246 01:45:10 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3076606 00:03:51.246 01:45:10 -- common/autotest_common.sh@1583 -- # waitforlisten 3076606 00:03:51.246 01:45:10 -- common/autotest_common.sh@831 -- # '[' -z 3076606 ']' 00:03:51.246 01:45:10 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:51.246 01:45:10 -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:51.246 01:45:10 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:51.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:51.246 01:45:10 -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:51.246 01:45:10 -- common/autotest_common.sh@10 -- # set +x 00:03:51.505 [2024-10-09 01:45:11.065569] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
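The waitforlisten call above blocks until the freshly launched spdk_tgt answers on /var/tmp/spdk.sock. A rough shell equivalent of that wait, polling with rpc.py (rpc_get_methods is a stock SPDK RPC; retry limits and cleanup are omitted for brevity):

    "$rootdir/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!
    # Poll the UNIX-domain RPC socket until the target responds.
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$spdk_tgt_pid" 2>/dev/null || exit 1   # give up if the target died
        sleep 0.5
    done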
00:03:51.505 [2024-10-09 01:45:11.065676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076606 ] 00:03:51.505 [2024-10-09 01:45:11.191239] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.764 [2024-10-09 01:45:11.380662] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.333 01:45:12 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:52.333 01:45:12 -- common/autotest_common.sh@864 -- # return 0 00:03:52.333 01:45:12 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:03:52.333 01:45:12 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:03:52.333 01:45:12 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:55.625 nvme0n1 00:03:55.625 01:45:15 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:55.625 [2024-10-09 01:45:15.391829] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:55.625 request: 00:03:55.625 { 00:03:55.625 "nvme_ctrlr_name": "nvme0", 00:03:55.625 "password": "test", 00:03:55.625 "method": "bdev_nvme_opal_revert", 00:03:55.625 "req_id": 1 00:03:55.625 } 00:03:55.625 Got JSON-RPC error response 00:03:55.625 response: 00:03:55.625 { 00:03:55.625 "code": -32602, 00:03:55.625 "message": "Invalid parameters" 00:03:55.625 } 00:03:55.625 01:45:15 -- common/autotest_common.sh@1589 -- # true 00:03:55.625 01:45:15 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:03:55.625 01:45:15 -- common/autotest_common.sh@1593 -- # killprocess 3076606 00:03:55.625 01:45:15 -- common/autotest_common.sh@950 -- # '[' -z 3076606 ']' 00:03:55.625 01:45:15 -- common/autotest_common.sh@954 -- # kill -0 3076606 00:03:55.625 01:45:15 -- common/autotest_common.sh@955 -- # uname 00:03:55.625 01:45:15 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:55.625 01:45:15 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3076606 00:03:55.884 01:45:15 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:55.884 01:45:15 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:55.884 01:45:15 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3076606' 00:03:55.884 killing process with pid 3076606 00:03:55.884 01:45:15 -- common/autotest_common.sh@969 -- # kill 3076606 00:03:55.884 01:45:15 -- common/autotest_common.sh@974 -- # wait 3076606 00:04:02.456 01:45:21 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:02.456 01:45:21 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:02.456 01:45:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:02.456 01:45:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:02.456 01:45:21 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:02.456 01:45:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:02.456 01:45:21 -- common/autotest_common.sh@10 -- # set +x 00:04:02.456 01:45:21 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:02.456 01:45:21 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/env.sh 00:04:02.456 01:45:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:02.456 01:45:21 -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:04:02.456 01:45:21 -- common/autotest_common.sh@10 -- # set +x 00:04:02.456 ************************************ 00:04:02.456 START TEST env 00:04:02.456 ************************************ 00:04:02.456 01:45:21 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/env.sh 00:04:02.456 * Looking for test storage... 00:04:02.456 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env 00:04:02.456 01:45:21 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:02.456 01:45:21 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:02.456 01:45:21 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:02.456 01:45:21 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:02.456 01:45:21 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.456 01:45:21 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.456 01:45:21 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.456 01:45:21 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.456 01:45:21 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.456 01:45:21 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.456 01:45:21 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.456 01:45:21 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.456 01:45:21 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.456 01:45:21 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.456 01:45:21 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.456 01:45:21 env -- scripts/common.sh@344 -- # case "$op" in 00:04:02.456 01:45:21 env -- scripts/common.sh@345 -- # : 1 00:04:02.456 01:45:21 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.456 01:45:21 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:02.456 01:45:21 env -- scripts/common.sh@365 -- # decimal 1 00:04:02.456 01:45:21 env -- scripts/common.sh@353 -- # local d=1 00:04:02.456 01:45:21 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.456 01:45:21 env -- scripts/common.sh@355 -- # echo 1 00:04:02.456 01:45:21 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.456 01:45:21 env -- scripts/common.sh@366 -- # decimal 2 00:04:02.456 01:45:21 env -- scripts/common.sh@353 -- # local d=2 00:04:02.456 01:45:21 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.456 01:45:21 env -- scripts/common.sh@355 -- # echo 2 00:04:02.456 01:45:21 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.456 01:45:21 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.456 01:45:21 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.456 01:45:21 env -- scripts/common.sh@368 -- # return 0 00:04:02.456 01:45:21 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.456 01:45:21 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:02.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.456 --rc genhtml_branch_coverage=1 00:04:02.456 --rc genhtml_function_coverage=1 00:04:02.456 --rc genhtml_legend=1 00:04:02.456 --rc geninfo_all_blocks=1 00:04:02.456 --rc geninfo_unexecuted_blocks=1 00:04:02.456 00:04:02.456 ' 00:04:02.456 01:45:21 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:02.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.456 --rc genhtml_branch_coverage=1 00:04:02.456 --rc genhtml_function_coverage=1 00:04:02.456 --rc genhtml_legend=1 00:04:02.456 --rc geninfo_all_blocks=1 00:04:02.456 --rc geninfo_unexecuted_blocks=1 00:04:02.456 00:04:02.456 ' 00:04:02.456 01:45:21 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:02.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.456 --rc genhtml_branch_coverage=1 00:04:02.456 --rc genhtml_function_coverage=1 00:04:02.456 --rc genhtml_legend=1 00:04:02.456 --rc geninfo_all_blocks=1 00:04:02.456 --rc geninfo_unexecuted_blocks=1 00:04:02.456 00:04:02.456 ' 00:04:02.456 01:45:21 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:02.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.456 --rc genhtml_branch_coverage=1 00:04:02.456 --rc genhtml_function_coverage=1 00:04:02.456 --rc genhtml_legend=1 00:04:02.456 --rc geninfo_all_blocks=1 00:04:02.456 --rc geninfo_unexecuted_blocks=1 00:04:02.456 00:04:02.456 ' 00:04:02.456 01:45:21 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/memory/memory_ut 00:04:02.456 01:45:21 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:02.456 01:45:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:02.456 01:45:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.456 ************************************ 00:04:02.456 START TEST env_memory 00:04:02.456 ************************************ 00:04:02.456 01:45:21 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/memory/memory_ut 00:04:02.456 00:04:02.456 00:04:02.456 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.456 http://cunit.sourceforge.net/ 00:04:02.456 00:04:02.456 00:04:02.456 Suite: memory 00:04:02.457 Test: alloc and free memory map ...[2024-10-09 01:45:21.676576] 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:02.457 passed 00:04:02.457 Test: mem map translation ...[2024-10-09 01:45:21.713554] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:02.457 [2024-10-09 01:45:21.713583] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:02.457 [2024-10-09 01:45:21.713657] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:02.457 [2024-10-09 01:45:21.713678] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:02.457 passed 00:04:02.457 Test: mem map registration ...[2024-10-09 01:45:21.771314] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:02.457 [2024-10-09 01:45:21.771341] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:02.457 passed 00:04:02.457 Test: mem map adjacent registrations ...passed 00:04:02.457 00:04:02.457 Run Summary: Type Total Ran Passed Failed Inactive 00:04:02.457 suites 1 1 n/a 0 0 00:04:02.457 tests 4 4 4 0 0 00:04:02.457 asserts 152 152 152 0 n/a 00:04:02.457 00:04:02.457 Elapsed time = 0.207 seconds 00:04:02.457 00:04:02.457 real 0m0.246s 00:04:02.457 user 0m0.215s 00:04:02.457 sys 0m0.030s 00:04:02.457 01:45:21 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:02.457 01:45:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:02.457 ************************************ 00:04:02.457 END TEST env_memory 00:04:02.457 ************************************ 00:04:02.457 01:45:21 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:02.457 01:45:21 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:02.457 01:45:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:02.457 01:45:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.457 ************************************ 00:04:02.457 START TEST env_vtophys 00:04:02.457 ************************************ 00:04:02.457 01:45:21 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:02.457 EAL: lib.eal log level changed from notice to debug 00:04:02.457 EAL: Detected lcore 0 as core 0 on socket 0 00:04:02.457 EAL: Detected lcore 1 as core 1 on socket 0 00:04:02.457 EAL: Detected lcore 2 as core 2 on socket 0 00:04:02.457 EAL: Detected lcore 3 as core 3 on socket 0 00:04:02.457 EAL: Detected lcore 4 as core 4 on socket 0 00:04:02.457 EAL: Detected lcore 5 as core 8 on socket 0 00:04:02.457 EAL: Detected lcore 6 as core 9 on socket 0 00:04:02.457 EAL: Detected lcore 7 as core 10 on socket 0 00:04:02.457 EAL: Detected lcore 8 as core 11 on socket 0 00:04:02.457 EAL: Detected lcore 9 as core 16 on socket 0 00:04:02.457 EAL: Detected lcore 10 
as core 17 on socket 0 00:04:02.457 EAL: Detected lcore 11 as core 18 on socket 0 00:04:02.457 EAL: Detected lcore 12 as core 19 on socket 0 00:04:02.457 EAL: Detected lcore 13 as core 20 on socket 0 00:04:02.457 EAL: Detected lcore 14 as core 24 on socket 0 00:04:02.457 EAL: Detected lcore 15 as core 25 on socket 0 00:04:02.457 EAL: Detected lcore 16 as core 26 on socket 0 00:04:02.457 EAL: Detected lcore 17 as core 27 on socket 0 00:04:02.457 EAL: Detected lcore 18 as core 0 on socket 1 00:04:02.457 EAL: Detected lcore 19 as core 1 on socket 1 00:04:02.457 EAL: Detected lcore 20 as core 2 on socket 1 00:04:02.457 EAL: Detected lcore 21 as core 3 on socket 1 00:04:02.457 EAL: Detected lcore 22 as core 4 on socket 1 00:04:02.457 EAL: Detected lcore 23 as core 8 on socket 1 00:04:02.457 EAL: Detected lcore 24 as core 9 on socket 1 00:04:02.457 EAL: Detected lcore 25 as core 10 on socket 1 00:04:02.457 EAL: Detected lcore 26 as core 11 on socket 1 00:04:02.457 EAL: Detected lcore 27 as core 16 on socket 1 00:04:02.457 EAL: Detected lcore 28 as core 17 on socket 1 00:04:02.457 EAL: Detected lcore 29 as core 18 on socket 1 00:04:02.457 EAL: Detected lcore 30 as core 19 on socket 1 00:04:02.457 EAL: Detected lcore 31 as core 20 on socket 1 00:04:02.457 EAL: Detected lcore 32 as core 24 on socket 1 00:04:02.457 EAL: Detected lcore 33 as core 25 on socket 1 00:04:02.457 EAL: Detected lcore 34 as core 26 on socket 1 00:04:02.457 EAL: Detected lcore 35 as core 27 on socket 1 00:04:02.457 EAL: Detected lcore 36 as core 0 on socket 0 00:04:02.457 EAL: Detected lcore 37 as core 1 on socket 0 00:04:02.457 EAL: Detected lcore 38 as core 2 on socket 0 00:04:02.457 EAL: Detected lcore 39 as core 3 on socket 0 00:04:02.457 EAL: Detected lcore 40 as core 4 on socket 0 00:04:02.457 EAL: Detected lcore 41 as core 8 on socket 0 00:04:02.457 EAL: Detected lcore 42 as core 9 on socket 0 00:04:02.457 EAL: Detected lcore 43 as core 10 on socket 0 00:04:02.457 EAL: Detected lcore 44 as core 11 on socket 0 00:04:02.457 EAL: Detected lcore 45 as core 16 on socket 0 00:04:02.457 EAL: Detected lcore 46 as core 17 on socket 0 00:04:02.457 EAL: Detected lcore 47 as core 18 on socket 0 00:04:02.457 EAL: Detected lcore 48 as core 19 on socket 0 00:04:02.457 EAL: Detected lcore 49 as core 20 on socket 0 00:04:02.457 EAL: Detected lcore 50 as core 24 on socket 0 00:04:02.457 EAL: Detected lcore 51 as core 25 on socket 0 00:04:02.457 EAL: Detected lcore 52 as core 26 on socket 0 00:04:02.457 EAL: Detected lcore 53 as core 27 on socket 0 00:04:02.457 EAL: Detected lcore 54 as core 0 on socket 1 00:04:02.457 EAL: Detected lcore 55 as core 1 on socket 1 00:04:02.457 EAL: Detected lcore 56 as core 2 on socket 1 00:04:02.457 EAL: Detected lcore 57 as core 3 on socket 1 00:04:02.457 EAL: Detected lcore 58 as core 4 on socket 1 00:04:02.457 EAL: Detected lcore 59 as core 8 on socket 1 00:04:02.457 EAL: Detected lcore 60 as core 9 on socket 1 00:04:02.457 EAL: Detected lcore 61 as core 10 on socket 1 00:04:02.457 EAL: Detected lcore 62 as core 11 on socket 1 00:04:02.457 EAL: Detected lcore 63 as core 16 on socket 1 00:04:02.457 EAL: Detected lcore 64 as core 17 on socket 1 00:04:02.457 EAL: Detected lcore 65 as core 18 on socket 1 00:04:02.457 EAL: Detected lcore 66 as core 19 on socket 1 00:04:02.457 EAL: Detected lcore 67 as core 20 on socket 1 00:04:02.457 EAL: Detected lcore 68 as core 24 on socket 1 00:04:02.457 EAL: Detected lcore 69 as core 25 on socket 1 00:04:02.457 EAL: Detected lcore 70 as core 26 on socket 1 00:04:02.457 
EAL: Detected lcore 71 as core 27 on socket 1 00:04:02.457 EAL: Maximum logical cores by configuration: 128 00:04:02.457 EAL: Detected CPU lcores: 72 00:04:02.457 EAL: Detected NUMA nodes: 2 00:04:02.457 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:02.457 EAL: Detected shared linkage of DPDK 00:04:02.457 EAL: No shared files mode enabled, IPC will be disabled 00:04:02.457 EAL: Bus pci wants IOVA as 'DC' 00:04:02.457 EAL: Buses did not request a specific IOVA mode. 00:04:02.457 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:02.457 EAL: Selected IOVA mode 'VA' 00:04:02.457 EAL: Probing VFIO support... 00:04:02.457 EAL: IOMMU type 1 (Type 1) is supported 00:04:02.457 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:02.457 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:02.457 EAL: VFIO support initialized 00:04:02.457 EAL: Ask a virtual area of 0x2e000 bytes 00:04:02.457 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:02.457 EAL: Setting up physically contiguous memory... 00:04:02.457 EAL: Setting maximum number of open files to 524288 00:04:02.457 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:02.457 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:02.457 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:02.457 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.457 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:02.457 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.457 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.457 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:02.457 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:02.457 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.457 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:02.457 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.457 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.457 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:02.457 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:02.457 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.457 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:02.457 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.457 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.457 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:02.457 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:02.457 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.457 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:02.457 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.457 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.457 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:02.457 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:02.457 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:02.457 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.457 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:02.457 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:02.457 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.457 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:02.457 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:02.457 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.457 EAL: Virtual area 
found at 0x201400a00000 (size = 0x61000) 00:04:02.457 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:02.457 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.457 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:02.457 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:02.457 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.457 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:02.457 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:02.457 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.457 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:02.457 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:02.457 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.457 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:02.457 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:02.457 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.457 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:02.457 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:02.457 EAL: Hugepages will be freed exactly as allocated. 00:04:02.457 EAL: No shared files mode enabled, IPC is disabled 00:04:02.457 EAL: No shared files mode enabled, IPC is disabled 00:04:02.457 EAL: TSC frequency is ~2300000 KHz 00:04:02.457 EAL: Main lcore 0 is ready (tid=7f38a33a8a40;cpuset=[0]) 00:04:02.457 EAL: Trying to obtain current memory policy. 00:04:02.457 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.458 EAL: Restoring previous memory policy: 0 00:04:02.458 EAL: request: mp_malloc_sync 00:04:02.458 EAL: No shared files mode enabled, IPC is disabled 00:04:02.458 EAL: Heap on socket 0 was expanded by 2MB 00:04:02.458 EAL: No shared files mode enabled, IPC is disabled 00:04:02.458 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:02.458 EAL: Mem event callback 'spdk:(nil)' registered 00:04:02.458 00:04:02.458 00:04:02.458 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.458 http://cunit.sourceforge.net/ 00:04:02.458 00:04:02.458 00:04:02.458 Suite: components_suite 00:04:02.717 Test: vtophys_malloc_test ...passed 00:04:02.717 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:02.717 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.717 EAL: Restoring previous memory policy: 4 00:04:02.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.717 EAL: request: mp_malloc_sync 00:04:02.717 EAL: No shared files mode enabled, IPC is disabled 00:04:02.717 EAL: Heap on socket 0 was expanded by 4MB 00:04:02.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.717 EAL: request: mp_malloc_sync 00:04:02.717 EAL: No shared files mode enabled, IPC is disabled 00:04:02.717 EAL: Heap on socket 0 was shrunk by 4MB 00:04:02.717 EAL: Trying to obtain current memory policy. 00:04:02.717 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.717 EAL: Restoring previous memory policy: 4 00:04:02.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.717 EAL: request: mp_malloc_sync 00:04:02.717 EAL: No shared files mode enabled, IPC is disabled 00:04:02.717 EAL: Heap on socket 0 was expanded by 6MB 00:04:02.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.717 EAL: request: mp_malloc_sync 00:04:02.717 EAL: No shared files mode enabled, IPC is disabled 00:04:02.717 EAL: Heap on socket 0 was shrunk by 6MB 00:04:02.717 EAL: Trying to obtain current memory policy. 
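Each "Heap on socket 0 was expanded/shrunk by N MB" pair above (and the larger cycles that follow) is the vtophys test growing and releasing DPDK heap, which consumes and frees the 2048 kB hugepages reserved earlier. A read-only way to watch that from another shell while the test runs, using standard sysfs paths and assuming the 2 MB hugepage size shown in the status table above:

    # Per-node free counters drop as the heap expands and recover as it
    # shrinks; node0/node1 match the two NUMA nodes detected by EAL.
    for node in /sys/devices/system/node/node[01]; do
        echo "$node: $(cat "$node"/hugepages/hugepages-2048kB/free_hugepages) free"
    done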
00:04:02.717 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.717 EAL: Restoring previous memory policy: 4 00:04:02.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.717 EAL: request: mp_malloc_sync 00:04:02.717 EAL: No shared files mode enabled, IPC is disabled 00:04:02.717 EAL: Heap on socket 0 was expanded by 10MB 00:04:02.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.717 EAL: request: mp_malloc_sync 00:04:02.717 EAL: No shared files mode enabled, IPC is disabled 00:04:02.717 EAL: Heap on socket 0 was shrunk by 10MB 00:04:02.717 EAL: Trying to obtain current memory policy. 00:04:02.717 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.976 EAL: Restoring previous memory policy: 4 00:04:02.976 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.976 EAL: request: mp_malloc_sync 00:04:02.976 EAL: No shared files mode enabled, IPC is disabled 00:04:02.976 EAL: Heap on socket 0 was expanded by 18MB 00:04:02.976 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.976 EAL: request: mp_malloc_sync 00:04:02.976 EAL: No shared files mode enabled, IPC is disabled 00:04:02.976 EAL: Heap on socket 0 was shrunk by 18MB 00:04:02.976 EAL: Trying to obtain current memory policy. 00:04:02.976 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.976 EAL: Restoring previous memory policy: 4 00:04:02.976 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.976 EAL: request: mp_malloc_sync 00:04:02.976 EAL: No shared files mode enabled, IPC is disabled 00:04:02.976 EAL: Heap on socket 0 was expanded by 34MB 00:04:02.976 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.976 EAL: request: mp_malloc_sync 00:04:02.976 EAL: No shared files mode enabled, IPC is disabled 00:04:02.976 EAL: Heap on socket 0 was shrunk by 34MB 00:04:02.977 EAL: Trying to obtain current memory policy. 00:04:02.977 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.977 EAL: Restoring previous memory policy: 4 00:04:02.977 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.977 EAL: request: mp_malloc_sync 00:04:02.977 EAL: No shared files mode enabled, IPC is disabled 00:04:02.977 EAL: Heap on socket 0 was expanded by 66MB 00:04:03.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.236 EAL: request: mp_malloc_sync 00:04:03.236 EAL: No shared files mode enabled, IPC is disabled 00:04:03.236 EAL: Heap on socket 0 was shrunk by 66MB 00:04:03.236 EAL: Trying to obtain current memory policy. 00:04:03.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.236 EAL: Restoring previous memory policy: 4 00:04:03.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.236 EAL: request: mp_malloc_sync 00:04:03.236 EAL: No shared files mode enabled, IPC is disabled 00:04:03.236 EAL: Heap on socket 0 was expanded by 130MB 00:04:03.494 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.494 EAL: request: mp_malloc_sync 00:04:03.494 EAL: No shared files mode enabled, IPC is disabled 00:04:03.494 EAL: Heap on socket 0 was shrunk by 130MB 00:04:03.754 EAL: Trying to obtain current memory policy. 
00:04:03.754 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.754 EAL: Restoring previous memory policy: 4 00:04:03.754 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.754 EAL: request: mp_malloc_sync 00:04:03.754 EAL: No shared files mode enabled, IPC is disabled 00:04:03.754 EAL: Heap on socket 0 was expanded by 258MB 00:04:04.013 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.272 EAL: request: mp_malloc_sync 00:04:04.272 EAL: No shared files mode enabled, IPC is disabled 00:04:04.272 EAL: Heap on socket 0 was shrunk by 258MB 00:04:04.531 EAL: Trying to obtain current memory policy. 00:04:04.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.791 EAL: Restoring previous memory policy: 4 00:04:04.791 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.791 EAL: request: mp_malloc_sync 00:04:04.791 EAL: No shared files mode enabled, IPC is disabled 00:04:04.791 EAL: Heap on socket 0 was expanded by 514MB 00:04:05.729 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.729 EAL: request: mp_malloc_sync 00:04:05.729 EAL: No shared files mode enabled, IPC is disabled 00:04:05.729 EAL: Heap on socket 0 was shrunk by 514MB 00:04:06.296 EAL: Trying to obtain current memory policy. 00:04:06.296 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.555 EAL: Restoring previous memory policy: 4 00:04:06.555 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.555 EAL: request: mp_malloc_sync 00:04:06.555 EAL: No shared files mode enabled, IPC is disabled 00:04:06.555 EAL: Heap on socket 0 was expanded by 1026MB 00:04:08.459 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.459 EAL: request: mp_malloc_sync 00:04:08.459 EAL: No shared files mode enabled, IPC is disabled 00:04:08.459 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:10.364 passed 00:04:10.364 00:04:10.364 Run Summary: Type Total Ran Passed Failed Inactive 00:04:10.364 suites 1 1 n/a 0 0 00:04:10.364 tests 2 2 2 0 0 00:04:10.364 asserts 497 497 497 0 n/a 00:04:10.364 00:04:10.364 Elapsed time = 7.537 seconds 00:04:10.364 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.364 EAL: request: mp_malloc_sync 00:04:10.364 EAL: No shared files mode enabled, IPC is disabled 00:04:10.364 EAL: Heap on socket 0 was shrunk by 2MB 00:04:10.364 EAL: No shared files mode enabled, IPC is disabled 00:04:10.364 EAL: No shared files mode enabled, IPC is disabled 00:04:10.364 EAL: No shared files mode enabled, IPC is disabled 00:04:10.364 00:04:10.364 real 0m7.790s 00:04:10.364 user 0m6.825s 00:04:10.364 sys 0m0.914s 00:04:10.364 01:45:29 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.364 01:45:29 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:10.364 ************************************ 00:04:10.364 END TEST env_vtophys 00:04:10.364 ************************************ 00:04:10.364 01:45:29 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/pci/pci_ut 00:04:10.364 01:45:29 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.364 01:45:29 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.364 01:45:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.364 ************************************ 00:04:10.364 START TEST env_pci 00:04:10.364 ************************************ 00:04:10.364 01:45:29 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/pci/pci_ut 00:04:10.364 00:04:10.364 00:04:10.364 CUnit - A unit testing 
framework for C - Version 2.1-3 00:04:10.364 http://cunit.sourceforge.net/ 00:04:10.364 00:04:10.364 00:04:10.364 Suite: pci 00:04:10.364 Test: pci_hook ...[2024-10-09 01:45:29.834294] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3079137 has claimed it 00:04:10.364 EAL: Cannot find device (10000:00:01.0) 00:04:10.364 EAL: Failed to attach device on primary process 00:04:10.364 passed 00:04:10.364 00:04:10.364 Run Summary: Type Total Ran Passed Failed Inactive 00:04:10.364 suites 1 1 n/a 0 0 00:04:10.364 tests 1 1 1 0 0 00:04:10.364 asserts 25 25 25 0 n/a 00:04:10.364 00:04:10.364 Elapsed time = 0.048 seconds 00:04:10.364 00:04:10.364 real 0m0.125s 00:04:10.364 user 0m0.046s 00:04:10.364 sys 0m0.079s 00:04:10.365 01:45:29 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.365 01:45:29 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:10.365 ************************************ 00:04:10.365 END TEST env_pci 00:04:10.365 ************************************ 00:04:10.365 01:45:29 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:10.365 01:45:29 env -- env/env.sh@15 -- # uname 00:04:10.365 01:45:29 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:10.365 01:45:29 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:10.365 01:45:29 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:10.365 01:45:29 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:10.365 01:45:29 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.365 01:45:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.365 ************************************ 00:04:10.365 START TEST env_dpdk_post_init 00:04:10.365 ************************************ 00:04:10.365 01:45:30 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:10.365 EAL: Detected CPU lcores: 72 00:04:10.365 EAL: Detected NUMA nodes: 2 00:04:10.365 EAL: Detected shared linkage of DPDK 00:04:10.365 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:10.365 EAL: Selected IOVA mode 'VA' 00:04:10.365 EAL: VFIO support initialized 00:04:10.365 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:10.624 EAL: Using IOMMU type 1 (Type 1) 00:04:10.624 EAL: Ignore mapping IO port bar(1) 00:04:10.624 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:10.624 EAL: Ignore mapping IO port bar(1) 00:04:10.624 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:10.624 EAL: Ignore mapping IO port bar(1) 00:04:10.624 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:10.624 EAL: Ignore mapping IO port bar(1) 00:04:10.624 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:10.624 EAL: Ignore mapping IO port bar(1) 00:04:10.624 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:10.624 EAL: Ignore mapping IO port bar(1) 00:04:10.624 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:10.624 EAL: Ignore mapping IO port bar(1) 00:04:10.624 EAL: Probe PCI driver: spdk_ioat 
(8086:2021) device: 0000:00:04.6 (socket 0) 00:04:10.624 EAL: Ignore mapping IO port bar(1) 00:04:10.625 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:11.563 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:11.563 EAL: Ignore mapping IO port bar(1) 00:04:11.563 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:11.563 EAL: Ignore mapping IO port bar(1) 00:04:11.563 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:11.563 EAL: Ignore mapping IO port bar(1) 00:04:11.563 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:11.563 EAL: Ignore mapping IO port bar(1) 00:04:11.563 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:11.563 EAL: Ignore mapping IO port bar(1) 00:04:11.563 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:11.563 EAL: Ignore mapping IO port bar(1) 00:04:11.563 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:11.563 EAL: Ignore mapping IO port bar(1) 00:04:11.563 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:11.563 EAL: Ignore mapping IO port bar(1) 00:04:11.563 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:16.836 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:16.836 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:17.096 Starting DPDK initialization... 00:04:17.096 Starting SPDK post initialization... 00:04:17.096 SPDK NVMe probe 00:04:17.096 Attaching to 0000:5e:00.0 00:04:17.096 Attached to 0000:5e:00.0 00:04:17.096 Cleaning up... 00:04:17.096 00:04:17.096 real 0m6.825s 00:04:17.096 user 0m5.034s 00:04:17.096 sys 0m0.859s 00:04:17.096 01:45:36 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:17.096 01:45:36 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:17.096 ************************************ 00:04:17.096 END TEST env_dpdk_post_init 00:04:17.096 ************************************ 00:04:17.096 01:45:36 env -- env/env.sh@26 -- # uname 00:04:17.096 01:45:36 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:17.096 01:45:36 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:17.096 01:45:36 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:17.096 01:45:36 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:17.096 01:45:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.356 ************************************ 00:04:17.356 START TEST env_mem_callbacks 00:04:17.356 ************************************ 00:04:17.356 01:45:36 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:17.356 EAL: Detected CPU lcores: 72 00:04:17.356 EAL: Detected NUMA nodes: 2 00:04:17.356 EAL: Detected shared linkage of DPDK 00:04:17.356 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:17.356 EAL: Selected IOVA mode 'VA' 00:04:17.356 EAL: VFIO support initialized 00:04:17.356 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:17.356 00:04:17.356 00:04:17.356 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.356 http://cunit.sourceforge.net/ 00:04:17.356 00:04:17.356 00:04:17.356 Suite: memory 
00:04:17.356 Test: test ... 00:04:17.356 register 0x200000200000 2097152 00:04:17.356 malloc 3145728 00:04:17.356 register 0x200000400000 4194304 00:04:17.356 buf 0x2000004fffc0 len 3145728 PASSED 00:04:17.356 malloc 64 00:04:17.356 buf 0x2000004ffec0 len 64 PASSED 00:04:17.356 malloc 4194304 00:04:17.356 register 0x200000800000 6291456 00:04:17.356 buf 0x2000009fffc0 len 4194304 PASSED 00:04:17.356 free 0x2000004fffc0 3145728 00:04:17.356 free 0x2000004ffec0 64 00:04:17.356 unregister 0x200000400000 4194304 PASSED 00:04:17.356 free 0x2000009fffc0 4194304 00:04:17.356 unregister 0x200000800000 6291456 PASSED 00:04:17.356 malloc 8388608 00:04:17.356 register 0x200000400000 10485760 00:04:17.356 buf 0x2000005fffc0 len 8388608 PASSED 00:04:17.356 free 0x2000005fffc0 8388608 00:04:17.356 unregister 0x200000400000 10485760 PASSED 00:04:17.356 passed 00:04:17.356 00:04:17.356 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.356 suites 1 1 n/a 0 0 00:04:17.356 tests 1 1 1 0 0 00:04:17.356 asserts 15 15 15 0 n/a 00:04:17.356 00:04:17.356 Elapsed time = 0.064 seconds 00:04:17.356 00:04:17.356 real 0m0.201s 00:04:17.356 user 0m0.089s 00:04:17.356 sys 0m0.111s 00:04:17.356 01:45:37 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:17.356 01:45:37 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:17.356 ************************************ 00:04:17.356 END TEST env_mem_callbacks 00:04:17.356 ************************************ 00:04:17.356 00:04:17.356 real 0m15.737s 00:04:17.356 user 0m12.446s 00:04:17.356 sys 0m2.342s 00:04:17.356 01:45:37 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:17.356 01:45:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.356 ************************************ 00:04:17.356 END TEST env 00:04:17.356 ************************************ 00:04:17.616 01:45:37 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/rpc.sh 00:04:17.616 01:45:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:17.616 01:45:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:17.616 01:45:37 -- common/autotest_common.sh@10 -- # set +x 00:04:17.616 ************************************ 00:04:17.616 START TEST rpc 00:04:17.616 ************************************ 00:04:17.616 01:45:37 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/rpc.sh 00:04:17.616 * Looking for test storage... 
00:04:17.616 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc 00:04:17.616 01:45:37 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:17.616 01:45:37 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:17.616 01:45:37 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:17.616 01:45:37 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:17.616 01:45:37 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.616 01:45:37 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.616 01:45:37 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.616 01:45:37 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.616 01:45:37 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.616 01:45:37 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.616 01:45:37 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.616 01:45:37 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.616 01:45:37 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.616 01:45:37 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.616 01:45:37 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.616 01:45:37 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:17.616 01:45:37 rpc -- scripts/common.sh@345 -- # : 1 00:04:17.616 01:45:37 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.616 01:45:37 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:17.616 01:45:37 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:17.616 01:45:37 rpc -- scripts/common.sh@353 -- # local d=1 00:04:17.616 01:45:37 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.616 01:45:37 rpc -- scripts/common.sh@355 -- # echo 1 00:04:17.616 01:45:37 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.876 01:45:37 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:17.876 01:45:37 rpc -- scripts/common.sh@353 -- # local d=2 00:04:17.876 01:45:37 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.876 01:45:37 rpc -- scripts/common.sh@355 -- # echo 2 00:04:17.876 01:45:37 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.876 01:45:37 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.876 01:45:37 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.876 01:45:37 rpc -- scripts/common.sh@368 -- # return 0 00:04:17.876 01:45:37 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.876 01:45:37 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:17.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.876 --rc genhtml_branch_coverage=1 00:04:17.876 --rc genhtml_function_coverage=1 00:04:17.876 --rc genhtml_legend=1 00:04:17.876 --rc geninfo_all_blocks=1 00:04:17.876 --rc geninfo_unexecuted_blocks=1 00:04:17.876 00:04:17.876 ' 00:04:17.876 01:45:37 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:17.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.876 --rc genhtml_branch_coverage=1 00:04:17.876 --rc genhtml_function_coverage=1 00:04:17.876 --rc genhtml_legend=1 00:04:17.876 --rc geninfo_all_blocks=1 00:04:17.876 --rc geninfo_unexecuted_blocks=1 00:04:17.876 00:04:17.876 ' 00:04:17.876 01:45:37 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:17.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.876 --rc genhtml_branch_coverage=1 00:04:17.876 --rc genhtml_function_coverage=1 
00:04:17.876 --rc genhtml_legend=1 00:04:17.876 --rc geninfo_all_blocks=1 00:04:17.876 --rc geninfo_unexecuted_blocks=1 00:04:17.876 00:04:17.876 ' 00:04:17.876 01:45:37 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:17.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.876 --rc genhtml_branch_coverage=1 00:04:17.876 --rc genhtml_function_coverage=1 00:04:17.876 --rc genhtml_legend=1 00:04:17.876 --rc geninfo_all_blocks=1 00:04:17.876 --rc geninfo_unexecuted_blocks=1 00:04:17.876 00:04:17.876 ' 00:04:17.876 01:45:37 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3080213 00:04:17.876 01:45:37 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.876 01:45:37 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3080213 00:04:17.876 01:45:37 rpc -- common/autotest_common.sh@831 -- # '[' -z 3080213 ']' 00:04:17.876 01:45:37 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.876 01:45:37 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:17.876 01:45:37 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.876 01:45:37 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:17.876 01:45:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.876 01:45:37 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:17.876 [2024-10-09 01:45:37.538870] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:04:17.876 [2024-10-09 01:45:37.538974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3080213 ] 00:04:17.876 [2024-10-09 01:45:37.668195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.135 [2024-10-09 01:45:37.859523] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:18.135 [2024-10-09 01:45:37.859585] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3080213' to capture a snapshot of events at runtime. 00:04:18.135 [2024-10-09 01:45:37.859601] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:18.135 [2024-10-09 01:45:37.859613] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:18.135 [2024-10-09 01:45:37.859625] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3080213 for offline analysis/debug. 
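The app_setup_trace notices above point at two ways to inspect the 'bdev' tracepoint group that rpc.sh enables with '-e bdev'. A minimal sketch of both, assuming an SPDK build tree at ./ and the pid reported in the notice; the /tmp copy path is illustrative, not part of the test:

    # Live capture from the running target (pid taken from the notice above):
    ./build/bin/spdk_trace -s spdk_tgt -p 3080213

    # Offline analysis of the shared-memory file mentioned in the last notice:
    cp /dev/shm/spdk_tgt_trace.pid3080213 /tmp/
    ./build/bin/spdk_trace -f /tmp/spdk_tgt_trace.pid3080213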
00:04:18.135 [2024-10-09 01:45:37.860796] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.074 01:45:38 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:19.074 01:45:38 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:19.074 01:45:38 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc 00:04:19.074 01:45:38 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc 00:04:19.074 01:45:38 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:19.074 01:45:38 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:19.074 01:45:38 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.074 01:45:38 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.074 01:45:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.074 ************************************ 00:04:19.074 START TEST rpc_integrity 00:04:19.074 ************************************ 00:04:19.074 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:19.074 01:45:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:19.074 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.074 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.074 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.074 01:45:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:19.074 01:45:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:19.074 01:45:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:19.074 01:45:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:19.074 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.074 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.074 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.074 01:45:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:19.074 01:45:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:19.074 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.074 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.074 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.074 01:45:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:19.074 { 00:04:19.074 "name": "Malloc0", 00:04:19.074 "aliases": [ 00:04:19.074 "00cb8d71-10f4-40f3-b191-8cd416006683" 00:04:19.074 ], 00:04:19.074 "product_name": "Malloc disk", 00:04:19.074 "block_size": 512, 00:04:19.074 "num_blocks": 16384, 00:04:19.074 "uuid": "00cb8d71-10f4-40f3-b191-8cd416006683", 00:04:19.074 "assigned_rate_limits": { 00:04:19.074 "rw_ios_per_sec": 0, 00:04:19.074 "rw_mbytes_per_sec": 0, 00:04:19.074 "r_mbytes_per_sec": 0, 00:04:19.074 "w_mbytes_per_sec": 0 00:04:19.074 }, 
00:04:19.074 "claimed": false, 00:04:19.074 "zoned": false, 00:04:19.074 "supported_io_types": { 00:04:19.074 "read": true, 00:04:19.074 "write": true, 00:04:19.074 "unmap": true, 00:04:19.074 "flush": true, 00:04:19.074 "reset": true, 00:04:19.074 "nvme_admin": false, 00:04:19.074 "nvme_io": false, 00:04:19.074 "nvme_io_md": false, 00:04:19.074 "write_zeroes": true, 00:04:19.074 "zcopy": true, 00:04:19.074 "get_zone_info": false, 00:04:19.074 "zone_management": false, 00:04:19.074 "zone_append": false, 00:04:19.074 "compare": false, 00:04:19.074 "compare_and_write": false, 00:04:19.074 "abort": true, 00:04:19.074 "seek_hole": false, 00:04:19.074 "seek_data": false, 00:04:19.074 "copy": true, 00:04:19.074 "nvme_iov_md": false 00:04:19.074 }, 00:04:19.074 "memory_domains": [ 00:04:19.074 { 00:04:19.074 "dma_device_id": "system", 00:04:19.074 "dma_device_type": 1 00:04:19.074 }, 00:04:19.074 { 00:04:19.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.074 "dma_device_type": 2 00:04:19.074 } 00:04:19.074 ], 00:04:19.074 "driver_specific": {} 00:04:19.074 } 00:04:19.074 ]' 00:04:19.074 01:45:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:19.074 01:45:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:19.074 01:45:38 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:19.074 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.074 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.074 [2024-10-09 01:45:38.765045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:19.074 [2024-10-09 01:45:38.765095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:19.074 [2024-10-09 01:45:38.765120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001e680 00:04:19.074 [2024-10-09 01:45:38.765132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:19.074 [2024-10-09 01:45:38.767351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:19.074 [2024-10-09 01:45:38.767378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:19.074 Passthru0 00:04:19.074 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.074 01:45:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:19.074 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.074 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.074 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.074 01:45:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:19.074 { 00:04:19.074 "name": "Malloc0", 00:04:19.074 "aliases": [ 00:04:19.074 "00cb8d71-10f4-40f3-b191-8cd416006683" 00:04:19.074 ], 00:04:19.074 "product_name": "Malloc disk", 00:04:19.074 "block_size": 512, 00:04:19.074 "num_blocks": 16384, 00:04:19.074 "uuid": "00cb8d71-10f4-40f3-b191-8cd416006683", 00:04:19.074 "assigned_rate_limits": { 00:04:19.074 "rw_ios_per_sec": 0, 00:04:19.074 "rw_mbytes_per_sec": 0, 00:04:19.074 "r_mbytes_per_sec": 0, 00:04:19.074 "w_mbytes_per_sec": 0 00:04:19.074 }, 00:04:19.074 "claimed": true, 00:04:19.074 "claim_type": "exclusive_write", 00:04:19.074 "zoned": false, 00:04:19.074 "supported_io_types": { 00:04:19.074 "read": true, 00:04:19.074 "write": true, 00:04:19.074 "unmap": true, 00:04:19.074 
"flush": true, 00:04:19.074 "reset": true, 00:04:19.074 "nvme_admin": false, 00:04:19.074 "nvme_io": false, 00:04:19.074 "nvme_io_md": false, 00:04:19.074 "write_zeroes": true, 00:04:19.074 "zcopy": true, 00:04:19.074 "get_zone_info": false, 00:04:19.074 "zone_management": false, 00:04:19.074 "zone_append": false, 00:04:19.074 "compare": false, 00:04:19.074 "compare_and_write": false, 00:04:19.074 "abort": true, 00:04:19.074 "seek_hole": false, 00:04:19.074 "seek_data": false, 00:04:19.074 "copy": true, 00:04:19.074 "nvme_iov_md": false 00:04:19.074 }, 00:04:19.074 "memory_domains": [ 00:04:19.074 { 00:04:19.074 "dma_device_id": "system", 00:04:19.074 "dma_device_type": 1 00:04:19.074 }, 00:04:19.074 { 00:04:19.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.074 "dma_device_type": 2 00:04:19.074 } 00:04:19.074 ], 00:04:19.074 "driver_specific": {} 00:04:19.074 }, 00:04:19.074 { 00:04:19.074 "name": "Passthru0", 00:04:19.074 "aliases": [ 00:04:19.074 "9c85f1bd-5255-53bc-b153-475d0d462fcb" 00:04:19.074 ], 00:04:19.074 "product_name": "passthru", 00:04:19.074 "block_size": 512, 00:04:19.074 "num_blocks": 16384, 00:04:19.074 "uuid": "9c85f1bd-5255-53bc-b153-475d0d462fcb", 00:04:19.074 "assigned_rate_limits": { 00:04:19.074 "rw_ios_per_sec": 0, 00:04:19.074 "rw_mbytes_per_sec": 0, 00:04:19.074 "r_mbytes_per_sec": 0, 00:04:19.074 "w_mbytes_per_sec": 0 00:04:19.074 }, 00:04:19.074 "claimed": false, 00:04:19.074 "zoned": false, 00:04:19.074 "supported_io_types": { 00:04:19.074 "read": true, 00:04:19.074 "write": true, 00:04:19.074 "unmap": true, 00:04:19.074 "flush": true, 00:04:19.074 "reset": true, 00:04:19.074 "nvme_admin": false, 00:04:19.074 "nvme_io": false, 00:04:19.074 "nvme_io_md": false, 00:04:19.074 "write_zeroes": true, 00:04:19.074 "zcopy": true, 00:04:19.074 "get_zone_info": false, 00:04:19.074 "zone_management": false, 00:04:19.075 "zone_append": false, 00:04:19.075 "compare": false, 00:04:19.075 "compare_and_write": false, 00:04:19.075 "abort": true, 00:04:19.075 "seek_hole": false, 00:04:19.075 "seek_data": false, 00:04:19.075 "copy": true, 00:04:19.075 "nvme_iov_md": false 00:04:19.075 }, 00:04:19.075 "memory_domains": [ 00:04:19.075 { 00:04:19.075 "dma_device_id": "system", 00:04:19.075 "dma_device_type": 1 00:04:19.075 }, 00:04:19.075 { 00:04:19.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.075 "dma_device_type": 2 00:04:19.075 } 00:04:19.075 ], 00:04:19.075 "driver_specific": { 00:04:19.075 "passthru": { 00:04:19.075 "name": "Passthru0", 00:04:19.075 "base_bdev_name": "Malloc0" 00:04:19.075 } 00:04:19.075 } 00:04:19.075 } 00:04:19.075 ]' 00:04:19.075 01:45:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:19.075 01:45:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:19.075 01:45:38 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:19.075 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.075 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.075 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.075 01:45:38 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:19.075 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.075 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.075 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.075 01:45:38 rpc.rpc_integrity -- rpc/rpc.sh@25 
-- # rpc_cmd bdev_get_bdevs 00:04:19.075 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.075 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.334 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.334 01:45:38 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:19.334 01:45:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:19.334 01:45:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:19.334 00:04:19.334 real 0m0.314s 00:04:19.334 user 0m0.165s 00:04:19.334 sys 0m0.052s 00:04:19.334 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.334 01:45:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.334 ************************************ 00:04:19.334 END TEST rpc_integrity 00:04:19.334 ************************************ 00:04:19.334 01:45:38 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:19.334 01:45:38 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.334 01:45:38 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.334 01:45:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.334 ************************************ 00:04:19.334 START TEST rpc_plugins 00:04:19.334 ************************************ 00:04:19.334 01:45:39 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:19.334 01:45:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:19.334 01:45:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.334 01:45:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.334 01:45:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.334 01:45:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:19.334 01:45:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:19.334 01:45:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.334 01:45:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.334 01:45:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.334 01:45:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:19.334 { 00:04:19.334 "name": "Malloc1", 00:04:19.334 "aliases": [ 00:04:19.334 "058ad212-5e74-431d-bb5d-8bef619f449f" 00:04:19.334 ], 00:04:19.334 "product_name": "Malloc disk", 00:04:19.334 "block_size": 4096, 00:04:19.334 "num_blocks": 256, 00:04:19.334 "uuid": "058ad212-5e74-431d-bb5d-8bef619f449f", 00:04:19.334 "assigned_rate_limits": { 00:04:19.334 "rw_ios_per_sec": 0, 00:04:19.335 "rw_mbytes_per_sec": 0, 00:04:19.335 "r_mbytes_per_sec": 0, 00:04:19.335 "w_mbytes_per_sec": 0 00:04:19.335 }, 00:04:19.335 "claimed": false, 00:04:19.335 "zoned": false, 00:04:19.335 "supported_io_types": { 00:04:19.335 "read": true, 00:04:19.335 "write": true, 00:04:19.335 "unmap": true, 00:04:19.335 "flush": true, 00:04:19.335 "reset": true, 00:04:19.335 "nvme_admin": false, 00:04:19.335 "nvme_io": false, 00:04:19.335 "nvme_io_md": false, 00:04:19.335 "write_zeroes": true, 00:04:19.335 "zcopy": true, 00:04:19.335 "get_zone_info": false, 00:04:19.335 "zone_management": false, 00:04:19.335 "zone_append": false, 00:04:19.335 "compare": false, 00:04:19.335 "compare_and_write": false, 00:04:19.335 "abort": true, 00:04:19.335 "seek_hole": false, 00:04:19.335 "seek_data": false, 00:04:19.335 "copy": true, 00:04:19.335 "nvme_iov_md": 
false 00:04:19.335 }, 00:04:19.335 "memory_domains": [ 00:04:19.335 { 00:04:19.335 "dma_device_id": "system", 00:04:19.335 "dma_device_type": 1 00:04:19.335 }, 00:04:19.335 { 00:04:19.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.335 "dma_device_type": 2 00:04:19.335 } 00:04:19.335 ], 00:04:19.335 "driver_specific": {} 00:04:19.335 } 00:04:19.335 ]' 00:04:19.335 01:45:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:19.335 01:45:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:19.335 01:45:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:19.335 01:45:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.335 01:45:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.335 01:45:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.335 01:45:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:19.335 01:45:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.335 01:45:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.335 01:45:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.335 01:45:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:19.335 01:45:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:19.594 01:45:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:19.594 00:04:19.594 real 0m0.154s 00:04:19.594 user 0m0.088s 00:04:19.594 sys 0m0.026s 00:04:19.594 01:45:39 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.594 01:45:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.594 ************************************ 00:04:19.594 END TEST rpc_plugins 00:04:19.594 ************************************ 00:04:19.594 01:45:39 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:19.594 01:45:39 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.594 01:45:39 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.594 01:45:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.594 ************************************ 00:04:19.594 START TEST rpc_trace_cmd_test 00:04:19.594 ************************************ 00:04:19.594 01:45:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:19.594 01:45:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:19.594 01:45:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:19.594 01:45:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.594 01:45:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:19.594 01:45:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.594 01:45:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:19.594 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3080213", 00:04:19.594 "tpoint_group_mask": "0x8", 00:04:19.594 "iscsi_conn": { 00:04:19.594 "mask": "0x2", 00:04:19.594 "tpoint_mask": "0x0" 00:04:19.594 }, 00:04:19.594 "scsi": { 00:04:19.594 "mask": "0x4", 00:04:19.594 "tpoint_mask": "0x0" 00:04:19.594 }, 00:04:19.594 "bdev": { 00:04:19.594 "mask": "0x8", 00:04:19.594 "tpoint_mask": "0xffffffffffffffff" 00:04:19.594 }, 00:04:19.594 "nvmf_rdma": { 00:04:19.594 "mask": "0x10", 00:04:19.594 "tpoint_mask": "0x0" 00:04:19.594 }, 00:04:19.594 "nvmf_tcp": { 00:04:19.594 "mask": "0x20", 00:04:19.594 
"tpoint_mask": "0x0" 00:04:19.594 }, 00:04:19.594 "ftl": { 00:04:19.594 "mask": "0x40", 00:04:19.594 "tpoint_mask": "0x0" 00:04:19.594 }, 00:04:19.594 "blobfs": { 00:04:19.594 "mask": "0x80", 00:04:19.594 "tpoint_mask": "0x0" 00:04:19.594 }, 00:04:19.594 "dsa": { 00:04:19.594 "mask": "0x200", 00:04:19.594 "tpoint_mask": "0x0" 00:04:19.594 }, 00:04:19.594 "thread": { 00:04:19.594 "mask": "0x400", 00:04:19.594 "tpoint_mask": "0x0" 00:04:19.594 }, 00:04:19.594 "nvme_pcie": { 00:04:19.594 "mask": "0x800", 00:04:19.594 "tpoint_mask": "0x0" 00:04:19.594 }, 00:04:19.594 "iaa": { 00:04:19.594 "mask": "0x1000", 00:04:19.594 "tpoint_mask": "0x0" 00:04:19.594 }, 00:04:19.594 "nvme_tcp": { 00:04:19.594 "mask": "0x2000", 00:04:19.594 "tpoint_mask": "0x0" 00:04:19.594 }, 00:04:19.594 "bdev_nvme": { 00:04:19.594 "mask": "0x4000", 00:04:19.594 "tpoint_mask": "0x0" 00:04:19.594 }, 00:04:19.594 "sock": { 00:04:19.594 "mask": "0x8000", 00:04:19.594 "tpoint_mask": "0x0" 00:04:19.594 }, 00:04:19.594 "blob": { 00:04:19.594 "mask": "0x10000", 00:04:19.594 "tpoint_mask": "0x0" 00:04:19.594 }, 00:04:19.594 "bdev_raid": { 00:04:19.594 "mask": "0x20000", 00:04:19.594 "tpoint_mask": "0x0" 00:04:19.594 }, 00:04:19.594 "scheduler": { 00:04:19.594 "mask": "0x40000", 00:04:19.594 "tpoint_mask": "0x0" 00:04:19.594 } 00:04:19.594 }' 00:04:19.594 01:45:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:19.594 01:45:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:19.594 01:45:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:19.594 01:45:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:19.594 01:45:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:19.594 01:45:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:19.853 01:45:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:19.853 01:45:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:19.853 01:45:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:19.853 01:45:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:19.853 00:04:19.853 real 0m0.241s 00:04:19.853 user 0m0.191s 00:04:19.853 sys 0m0.041s 00:04:19.853 01:45:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.853 01:45:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:19.853 ************************************ 00:04:19.853 END TEST rpc_trace_cmd_test 00:04:19.853 ************************************ 00:04:19.853 01:45:39 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:19.853 01:45:39 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:19.853 01:45:39 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:19.853 01:45:39 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.853 01:45:39 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.853 01:45:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.853 ************************************ 00:04:19.853 START TEST rpc_daemon_integrity 00:04:19.853 ************************************ 00:04:19.853 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:19.853 01:45:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:19.853 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.853 01:45:39 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.853 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.853 01:45:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:19.853 01:45:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:19.853 01:45:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:19.853 01:45:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:19.853 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.853 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.853 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.853 01:45:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:19.853 01:45:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:19.853 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.853 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.853 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.853 01:45:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:19.853 { 00:04:19.853 "name": "Malloc2", 00:04:19.853 "aliases": [ 00:04:19.853 "53c895d7-370b-4578-b978-623fe5a19125" 00:04:19.853 ], 00:04:19.853 "product_name": "Malloc disk", 00:04:19.853 "block_size": 512, 00:04:19.853 "num_blocks": 16384, 00:04:19.853 "uuid": "53c895d7-370b-4578-b978-623fe5a19125", 00:04:19.853 "assigned_rate_limits": { 00:04:19.853 "rw_ios_per_sec": 0, 00:04:19.853 "rw_mbytes_per_sec": 0, 00:04:19.853 "r_mbytes_per_sec": 0, 00:04:19.853 "w_mbytes_per_sec": 0 00:04:19.853 }, 00:04:19.854 "claimed": false, 00:04:19.854 "zoned": false, 00:04:19.854 "supported_io_types": { 00:04:19.854 "read": true, 00:04:19.854 "write": true, 00:04:19.854 "unmap": true, 00:04:19.854 "flush": true, 00:04:19.854 "reset": true, 00:04:19.854 "nvme_admin": false, 00:04:19.854 "nvme_io": false, 00:04:19.854 "nvme_io_md": false, 00:04:19.854 "write_zeroes": true, 00:04:19.854 "zcopy": true, 00:04:19.854 "get_zone_info": false, 00:04:19.854 "zone_management": false, 00:04:19.854 "zone_append": false, 00:04:19.854 "compare": false, 00:04:19.854 "compare_and_write": false, 00:04:19.854 "abort": true, 00:04:19.854 "seek_hole": false, 00:04:19.854 "seek_data": false, 00:04:19.854 "copy": true, 00:04:19.854 "nvme_iov_md": false 00:04:19.854 }, 00:04:19.854 "memory_domains": [ 00:04:19.854 { 00:04:19.854 "dma_device_id": "system", 00:04:19.854 "dma_device_type": 1 00:04:19.854 }, 00:04:19.854 { 00:04:19.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.854 "dma_device_type": 2 00:04:19.854 } 00:04:19.854 ], 00:04:19.854 "driver_specific": {} 00:04:19.854 } 00:04:19.854 ]' 00:04:19.854 01:45:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.113 [2024-10-09 01:45:39.714870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:20.113 
[2024-10-09 01:45:39.714913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:20.113 [2024-10-09 01:45:39.714937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001f880 00:04:20.113 [2024-10-09 01:45:39.714948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:20.113 [2024-10-09 01:45:39.717105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:20.113 [2024-10-09 01:45:39.717130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:20.113 Passthru0 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:20.113 { 00:04:20.113 "name": "Malloc2", 00:04:20.113 "aliases": [ 00:04:20.113 "53c895d7-370b-4578-b978-623fe5a19125" 00:04:20.113 ], 00:04:20.113 "product_name": "Malloc disk", 00:04:20.113 "block_size": 512, 00:04:20.113 "num_blocks": 16384, 00:04:20.113 "uuid": "53c895d7-370b-4578-b978-623fe5a19125", 00:04:20.113 "assigned_rate_limits": { 00:04:20.113 "rw_ios_per_sec": 0, 00:04:20.113 "rw_mbytes_per_sec": 0, 00:04:20.113 "r_mbytes_per_sec": 0, 00:04:20.113 "w_mbytes_per_sec": 0 00:04:20.113 }, 00:04:20.113 "claimed": true, 00:04:20.113 "claim_type": "exclusive_write", 00:04:20.113 "zoned": false, 00:04:20.113 "supported_io_types": { 00:04:20.113 "read": true, 00:04:20.113 "write": true, 00:04:20.113 "unmap": true, 00:04:20.113 "flush": true, 00:04:20.113 "reset": true, 00:04:20.113 "nvme_admin": false, 00:04:20.113 "nvme_io": false, 00:04:20.113 "nvme_io_md": false, 00:04:20.113 "write_zeroes": true, 00:04:20.113 "zcopy": true, 00:04:20.113 "get_zone_info": false, 00:04:20.113 "zone_management": false, 00:04:20.113 "zone_append": false, 00:04:20.113 "compare": false, 00:04:20.113 "compare_and_write": false, 00:04:20.113 "abort": true, 00:04:20.113 "seek_hole": false, 00:04:20.113 "seek_data": false, 00:04:20.113 "copy": true, 00:04:20.113 "nvme_iov_md": false 00:04:20.113 }, 00:04:20.113 "memory_domains": [ 00:04:20.113 { 00:04:20.113 "dma_device_id": "system", 00:04:20.113 "dma_device_type": 1 00:04:20.113 }, 00:04:20.113 { 00:04:20.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.113 "dma_device_type": 2 00:04:20.113 } 00:04:20.113 ], 00:04:20.113 "driver_specific": {} 00:04:20.113 }, 00:04:20.113 { 00:04:20.113 "name": "Passthru0", 00:04:20.113 "aliases": [ 00:04:20.113 "37a469f6-3dc6-5c12-8f16-7f0f37fd14b5" 00:04:20.113 ], 00:04:20.113 "product_name": "passthru", 00:04:20.113 "block_size": 512, 00:04:20.113 "num_blocks": 16384, 00:04:20.113 "uuid": "37a469f6-3dc6-5c12-8f16-7f0f37fd14b5", 00:04:20.113 "assigned_rate_limits": { 00:04:20.113 "rw_ios_per_sec": 0, 00:04:20.113 "rw_mbytes_per_sec": 0, 00:04:20.113 "r_mbytes_per_sec": 0, 00:04:20.113 "w_mbytes_per_sec": 0 00:04:20.113 }, 00:04:20.113 "claimed": false, 00:04:20.113 "zoned": false, 00:04:20.113 "supported_io_types": { 00:04:20.113 "read": true, 00:04:20.113 "write": true, 00:04:20.113 "unmap": true, 00:04:20.113 "flush": true, 00:04:20.113 "reset": true, 
00:04:20.113 "nvme_admin": false, 00:04:20.113 "nvme_io": false, 00:04:20.113 "nvme_io_md": false, 00:04:20.113 "write_zeroes": true, 00:04:20.113 "zcopy": true, 00:04:20.113 "get_zone_info": false, 00:04:20.113 "zone_management": false, 00:04:20.113 "zone_append": false, 00:04:20.113 "compare": false, 00:04:20.113 "compare_and_write": false, 00:04:20.113 "abort": true, 00:04:20.113 "seek_hole": false, 00:04:20.113 "seek_data": false, 00:04:20.113 "copy": true, 00:04:20.113 "nvme_iov_md": false 00:04:20.113 }, 00:04:20.113 "memory_domains": [ 00:04:20.113 { 00:04:20.113 "dma_device_id": "system", 00:04:20.113 "dma_device_type": 1 00:04:20.113 }, 00:04:20.113 { 00:04:20.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.113 "dma_device_type": 2 00:04:20.113 } 00:04:20.113 ], 00:04:20.113 "driver_specific": { 00:04:20.113 "passthru": { 00:04:20.113 "name": "Passthru0", 00:04:20.113 "base_bdev_name": "Malloc2" 00:04:20.113 } 00:04:20.113 } 00:04:20.113 } 00:04:20.113 ]' 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:20.113 00:04:20.113 real 0m0.324s 00:04:20.113 user 0m0.171s 00:04:20.113 sys 0m0.059s 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.113 01:45:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.113 ************************************ 00:04:20.113 END TEST rpc_daemon_integrity 00:04:20.113 ************************************ 00:04:20.372 01:45:39 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:20.372 01:45:39 rpc -- rpc/rpc.sh@84 -- # killprocess 3080213 00:04:20.372 01:45:39 rpc -- common/autotest_common.sh@950 -- # '[' -z 3080213 ']' 00:04:20.372 01:45:39 rpc -- common/autotest_common.sh@954 -- # kill -0 3080213 00:04:20.372 01:45:39 rpc -- common/autotest_common.sh@955 -- # uname 00:04:20.372 01:45:39 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:20.372 01:45:39 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3080213 
00:04:20.372 01:45:39 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:20.372 01:45:39 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:20.373 01:45:40 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3080213' 00:04:20.373 killing process with pid 3080213 00:04:20.373 01:45:40 rpc -- common/autotest_common.sh@969 -- # kill 3080213 00:04:20.373 01:45:40 rpc -- common/autotest_common.sh@974 -- # wait 3080213 00:04:22.907 00:04:22.907 real 0m5.177s 00:04:22.907 user 0m5.681s 00:04:22.907 sys 0m1.032s 00:04:22.907 01:45:42 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.907 01:45:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.907 ************************************ 00:04:22.907 END TEST rpc 00:04:22.907 ************************************ 00:04:22.907 01:45:42 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:22.907 01:45:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.907 01:45:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.907 01:45:42 -- common/autotest_common.sh@10 -- # set +x 00:04:22.907 ************************************ 00:04:22.907 START TEST skip_rpc 00:04:22.907 ************************************ 00:04:22.907 01:45:42 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:22.907 * Looking for test storage... 00:04:22.907 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc 00:04:22.907 01:45:42 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:22.907 01:45:42 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:22.907 01:45:42 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:22.907 01:45:42 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.907 01:45:42 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:22.907 01:45:42 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.907 01:45:42 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:22.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.907 --rc genhtml_branch_coverage=1 00:04:22.907 --rc genhtml_function_coverage=1 00:04:22.907 --rc genhtml_legend=1 00:04:22.907 --rc geninfo_all_blocks=1 00:04:22.908 --rc geninfo_unexecuted_blocks=1 00:04:22.908 00:04:22.908 ' 00:04:22.908 01:45:42 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:22.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.908 --rc genhtml_branch_coverage=1 00:04:22.908 --rc genhtml_function_coverage=1 00:04:22.908 --rc genhtml_legend=1 00:04:22.908 --rc geninfo_all_blocks=1 00:04:22.908 --rc geninfo_unexecuted_blocks=1 00:04:22.908 00:04:22.908 ' 00:04:22.908 01:45:42 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:22.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.908 --rc genhtml_branch_coverage=1 00:04:22.908 --rc genhtml_function_coverage=1 00:04:22.908 --rc genhtml_legend=1 00:04:22.908 --rc geninfo_all_blocks=1 00:04:22.908 --rc geninfo_unexecuted_blocks=1 00:04:22.908 00:04:22.908 ' 00:04:22.908 01:45:42 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:22.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.908 --rc genhtml_branch_coverage=1 00:04:22.908 --rc genhtml_function_coverage=1 00:04:22.908 --rc genhtml_legend=1 00:04:22.908 --rc geninfo_all_blocks=1 00:04:22.908 --rc geninfo_unexecuted_blocks=1 00:04:22.908 00:04:22.908 ' 00:04:22.908 01:45:42 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/config.json 00:04:22.908 01:45:42 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/log.txt 00:04:22.908 01:45:42 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:22.908 01:45:42 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.908 01:45:42 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.908 01:45:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.908 ************************************ 00:04:22.908 START TEST skip_rpc 00:04:22.908 ************************************ 00:04:22.908 01:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:23.167 
01:45:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3081101 00:04:23.167 01:45:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.167 01:45:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:23.167 01:45:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:23.167 [2024-10-09 01:45:42.820878] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:04:23.167 [2024-10-09 01:45:42.820971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3081101 ] 00:04:23.167 [2024-10-09 01:45:42.950550] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.425 [2024-10-09 01:45:43.145083] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3081101 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 3081101 ']' 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 3081101 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3081101 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3081101' 00:04:28.705 killing process with pid 3081101 00:04:28.705 01:45:47 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 3081101 00:04:28.705 01:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 3081101 00:04:30.611 00:04:30.611 real 0m7.500s 00:04:30.611 user 0m7.047s 00:04:30.611 sys 0m0.495s 00:04:30.611 01:45:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.611 01:45:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.611 ************************************ 00:04:30.611 END TEST skip_rpc 00:04:30.611 ************************************ 00:04:30.611 01:45:50 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:30.611 01:45:50 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.611 01:45:50 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.611 01:45:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.611 ************************************ 00:04:30.611 START TEST skip_rpc_with_json 00:04:30.611 ************************************ 00:04:30.611 01:45:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:30.611 01:45:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:30.611 01:45:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3082176 00:04:30.611 01:45:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.611 01:45:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3082176 00:04:30.611 01:45:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 3082176 ']' 00:04:30.611 01:45:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.611 01:45:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:30.611 01:45:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.611 01:45:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:30.611 01:45:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:30.611 01:45:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.611 [2024-10-09 01:45:50.407936] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
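The skip_rpc test that ends above is an inverse check: spdk_tgt is started with --no-rpc-server, so rpc_cmd spdk_get_version must fail (es=1 in the log). A rough out-of-harness equivalent, assuming an SPDK build tree; the echo messages are illustrative and not part of the test:

    # Start the target with its RPC listener disabled, exactly as the test does:
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &

    # Every RPC should now fail; spdk_get_version is the probe the test uses:
    ./scripts/rpc.py spdk_get_version \
        && echo 'unexpected: RPC server answered' \
        || echo 'RPC unavailable, as skip_rpc expects'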
00:04:30.611 [2024-10-09 01:45:50.408054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3082176 ] 00:04:30.871 [2024-10-09 01:45:50.536980] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.129 [2024-10-09 01:45:50.739023] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.698 01:45:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:31.698 01:45:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:31.698 01:45:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:31.698 01:45:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.698 01:45:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.698 [2024-10-09 01:45:51.509762] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:31.698 request: 00:04:31.698 { 00:04:31.698 "trtype": "tcp", 00:04:31.698 "method": "nvmf_get_transports", 00:04:31.698 "req_id": 1 00:04:31.698 } 00:04:31.698 Got JSON-RPC error response 00:04:31.698 response: 00:04:31.698 { 00:04:31.698 "code": -19, 00:04:31.698 "message": "No such device" 00:04:31.698 } 00:04:31.698 01:45:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:31.698 01:45:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:31.698 01:45:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.698 01:45:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.698 [2024-10-09 01:45:51.517869] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:31.958 01:45:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.958 01:45:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:31.959 01:45:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.959 01:45:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.959 01:45:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.959 01:45:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/config.json 00:04:31.959 { 00:04:31.959 "subsystems": [ 00:04:31.959 { 00:04:31.959 "subsystem": "fsdev", 00:04:31.959 "config": [ 00:04:31.959 { 00:04:31.959 "method": "fsdev_set_opts", 00:04:31.959 "params": { 00:04:31.959 "fsdev_io_pool_size": 65535, 00:04:31.959 "fsdev_io_cache_size": 256 00:04:31.959 } 00:04:31.959 } 00:04:31.959 ] 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "subsystem": "keyring", 00:04:31.959 "config": [] 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "subsystem": "iobuf", 00:04:31.959 "config": [ 00:04:31.959 { 00:04:31.959 "method": "iobuf_set_options", 00:04:31.959 "params": { 00:04:31.959 "small_pool_count": 8192, 00:04:31.959 "large_pool_count": 1024, 00:04:31.959 "small_bufsize": 8192, 00:04:31.959 "large_bufsize": 135168 00:04:31.959 } 00:04:31.959 } 00:04:31.959 ] 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "subsystem": "sock", 00:04:31.959 "config": [ 00:04:31.959 { 00:04:31.959 "method": 
"sock_set_default_impl", 00:04:31.959 "params": { 00:04:31.959 "impl_name": "posix" 00:04:31.959 } 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "method": "sock_impl_set_options", 00:04:31.959 "params": { 00:04:31.959 "impl_name": "ssl", 00:04:31.959 "recv_buf_size": 4096, 00:04:31.959 "send_buf_size": 4096, 00:04:31.959 "enable_recv_pipe": true, 00:04:31.959 "enable_quickack": false, 00:04:31.959 "enable_placement_id": 0, 00:04:31.959 "enable_zerocopy_send_server": true, 00:04:31.959 "enable_zerocopy_send_client": false, 00:04:31.959 "zerocopy_threshold": 0, 00:04:31.959 "tls_version": 0, 00:04:31.959 "enable_ktls": false 00:04:31.959 } 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "method": "sock_impl_set_options", 00:04:31.959 "params": { 00:04:31.959 "impl_name": "posix", 00:04:31.959 "recv_buf_size": 2097152, 00:04:31.959 "send_buf_size": 2097152, 00:04:31.959 "enable_recv_pipe": true, 00:04:31.959 "enable_quickack": false, 00:04:31.959 "enable_placement_id": 0, 00:04:31.959 "enable_zerocopy_send_server": true, 00:04:31.959 "enable_zerocopy_send_client": false, 00:04:31.959 "zerocopy_threshold": 0, 00:04:31.959 "tls_version": 0, 00:04:31.959 "enable_ktls": false 00:04:31.959 } 00:04:31.959 } 00:04:31.959 ] 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "subsystem": "vmd", 00:04:31.959 "config": [] 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "subsystem": "accel", 00:04:31.959 "config": [ 00:04:31.959 { 00:04:31.959 "method": "accel_set_options", 00:04:31.959 "params": { 00:04:31.959 "small_cache_size": 128, 00:04:31.959 "large_cache_size": 16, 00:04:31.959 "task_count": 2048, 00:04:31.959 "sequence_count": 2048, 00:04:31.959 "buf_count": 2048 00:04:31.959 } 00:04:31.959 } 00:04:31.959 ] 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "subsystem": "bdev", 00:04:31.959 "config": [ 00:04:31.959 { 00:04:31.959 "method": "bdev_set_options", 00:04:31.959 "params": { 00:04:31.959 "bdev_io_pool_size": 65535, 00:04:31.959 "bdev_io_cache_size": 256, 00:04:31.959 "bdev_auto_examine": true, 00:04:31.959 "iobuf_small_cache_size": 128, 00:04:31.959 "iobuf_large_cache_size": 16 00:04:31.959 } 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "method": "bdev_raid_set_options", 00:04:31.959 "params": { 00:04:31.959 "process_window_size_kb": 1024, 00:04:31.959 "process_max_bandwidth_mb_sec": 0 00:04:31.959 } 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "method": "bdev_iscsi_set_options", 00:04:31.959 "params": { 00:04:31.959 "timeout_sec": 30 00:04:31.959 } 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "method": "bdev_nvme_set_options", 00:04:31.959 "params": { 00:04:31.959 "action_on_timeout": "none", 00:04:31.959 "timeout_us": 0, 00:04:31.959 "timeout_admin_us": 0, 00:04:31.959 "keep_alive_timeout_ms": 10000, 00:04:31.959 "arbitration_burst": 0, 00:04:31.959 "low_priority_weight": 0, 00:04:31.959 "medium_priority_weight": 0, 00:04:31.959 "high_priority_weight": 0, 00:04:31.959 "nvme_adminq_poll_period_us": 10000, 00:04:31.959 "nvme_ioq_poll_period_us": 0, 00:04:31.959 "io_queue_requests": 0, 00:04:31.959 "delay_cmd_submit": true, 00:04:31.959 "transport_retry_count": 4, 00:04:31.959 "bdev_retry_count": 3, 00:04:31.959 "transport_ack_timeout": 0, 00:04:31.959 "ctrlr_loss_timeout_sec": 0, 00:04:31.959 "reconnect_delay_sec": 0, 00:04:31.959 "fast_io_fail_timeout_sec": 0, 00:04:31.959 "disable_auto_failback": false, 00:04:31.959 "generate_uuids": false, 00:04:31.959 "transport_tos": 0, 00:04:31.959 "nvme_error_stat": false, 00:04:31.959 "rdma_srq_size": 0, 00:04:31.959 "io_path_stat": false, 00:04:31.959 
"allow_accel_sequence": false, 00:04:31.959 "rdma_max_cq_size": 0, 00:04:31.959 "rdma_cm_event_timeout_ms": 0, 00:04:31.959 "dhchap_digests": [ 00:04:31.959 "sha256", 00:04:31.959 "sha384", 00:04:31.959 "sha512" 00:04:31.959 ], 00:04:31.959 "dhchap_dhgroups": [ 00:04:31.959 "null", 00:04:31.959 "ffdhe2048", 00:04:31.959 "ffdhe3072", 00:04:31.959 "ffdhe4096", 00:04:31.959 "ffdhe6144", 00:04:31.959 "ffdhe8192" 00:04:31.959 ] 00:04:31.959 } 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "method": "bdev_nvme_set_hotplug", 00:04:31.959 "params": { 00:04:31.959 "period_us": 100000, 00:04:31.959 "enable": false 00:04:31.959 } 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "method": "bdev_wait_for_examine" 00:04:31.959 } 00:04:31.959 ] 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "subsystem": "scsi", 00:04:31.959 "config": null 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "subsystem": "scheduler", 00:04:31.959 "config": [ 00:04:31.959 { 00:04:31.959 "method": "framework_set_scheduler", 00:04:31.959 "params": { 00:04:31.959 "name": "static" 00:04:31.959 } 00:04:31.959 } 00:04:31.959 ] 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "subsystem": "vhost_scsi", 00:04:31.959 "config": [] 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "subsystem": "vhost_blk", 00:04:31.959 "config": [] 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "subsystem": "ublk", 00:04:31.959 "config": [] 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "subsystem": "nbd", 00:04:31.959 "config": [] 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "subsystem": "nvmf", 00:04:31.959 "config": [ 00:04:31.959 { 00:04:31.959 "method": "nvmf_set_config", 00:04:31.959 "params": { 00:04:31.959 "discovery_filter": "match_any", 00:04:31.959 "admin_cmd_passthru": { 00:04:31.959 "identify_ctrlr": false 00:04:31.959 }, 00:04:31.959 "dhchap_digests": [ 00:04:31.959 "sha256", 00:04:31.959 "sha384", 00:04:31.959 "sha512" 00:04:31.959 ], 00:04:31.959 "dhchap_dhgroups": [ 00:04:31.959 "null", 00:04:31.959 "ffdhe2048", 00:04:31.959 "ffdhe3072", 00:04:31.959 "ffdhe4096", 00:04:31.959 "ffdhe6144", 00:04:31.959 "ffdhe8192" 00:04:31.959 ] 00:04:31.959 } 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "method": "nvmf_set_max_subsystems", 00:04:31.959 "params": { 00:04:31.959 "max_subsystems": 1024 00:04:31.959 } 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "method": "nvmf_set_crdt", 00:04:31.959 "params": { 00:04:31.959 "crdt1": 0, 00:04:31.959 "crdt2": 0, 00:04:31.959 "crdt3": 0 00:04:31.959 } 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "method": "nvmf_create_transport", 00:04:31.959 "params": { 00:04:31.959 "trtype": "TCP", 00:04:31.959 "max_queue_depth": 128, 00:04:31.959 "max_io_qpairs_per_ctrlr": 127, 00:04:31.959 "in_capsule_data_size": 4096, 00:04:31.959 "max_io_size": 131072, 00:04:31.959 "io_unit_size": 131072, 00:04:31.959 "max_aq_depth": 128, 00:04:31.959 "num_shared_buffers": 511, 00:04:31.959 "buf_cache_size": 4294967295, 00:04:31.959 "dif_insert_or_strip": false, 00:04:31.959 "zcopy": false, 00:04:31.959 "c2h_success": true, 00:04:31.959 "sock_priority": 0, 00:04:31.959 "abort_timeout_sec": 1, 00:04:31.959 "ack_timeout": 0, 00:04:31.959 "data_wr_pool_size": 0 00:04:31.959 } 00:04:31.959 } 00:04:31.959 ] 00:04:31.959 }, 00:04:31.959 { 00:04:31.959 "subsystem": "iscsi", 00:04:31.959 "config": [ 00:04:31.959 { 00:04:31.959 "method": "iscsi_set_options", 00:04:31.959 "params": { 00:04:31.959 "node_base": "iqn.2016-06.io.spdk", 00:04:31.959 "max_sessions": 128, 00:04:31.959 "max_connections_per_session": 2, 00:04:31.959 "max_queue_depth": 64, 00:04:31.959 "default_time2wait": 2, 
00:04:31.959 "default_time2retain": 20, 00:04:31.959 "first_burst_length": 8192, 00:04:31.959 "immediate_data": true, 00:04:31.959 "allow_duplicated_isid": false, 00:04:31.959 "error_recovery_level": 0, 00:04:31.959 "nop_timeout": 60, 00:04:31.959 "nop_in_interval": 30, 00:04:31.959 "disable_chap": false, 00:04:31.959 "require_chap": false, 00:04:31.959 "mutual_chap": false, 00:04:31.959 "chap_group": 0, 00:04:31.959 "max_large_datain_per_connection": 64, 00:04:31.959 "max_r2t_per_connection": 4, 00:04:31.959 "pdu_pool_size": 36864, 00:04:31.959 "immediate_data_pool_size": 16384, 00:04:31.960 "data_out_pool_size": 2048 00:04:31.960 } 00:04:31.960 } 00:04:31.960 ] 00:04:31.960 } 00:04:31.960 ] 00:04:31.960 } 00:04:31.960 01:45:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:31.960 01:45:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3082176 00:04:31.960 01:45:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3082176 ']' 00:04:31.960 01:45:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3082176 00:04:31.960 01:45:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:31.960 01:45:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:31.960 01:45:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3082176 00:04:31.960 01:45:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:31.960 01:45:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:31.960 01:45:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3082176' 00:04:31.960 killing process with pid 3082176 00:04:31.960 01:45:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3082176 00:04:31.960 01:45:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3082176 00:04:34.496 01:45:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3082713 00:04:34.496 01:45:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/config.json 00:04:34.496 01:45:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:39.766 01:45:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3082713 00:04:39.767 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3082713 ']' 00:04:39.767 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3082713 00:04:39.767 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:39.767 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:39.767 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3082713 00:04:39.767 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:39.767 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:39.767 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3082713' 00:04:39.767 killing process with pid 3082713 00:04:39.767 01:45:59 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3082713 00:04:39.767 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3082713 00:04:42.300 01:46:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/log.txt 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/log.txt 00:04:42.301 00:04:42.301 real 0m11.339s 00:04:42.301 user 0m10.769s 00:04:42.301 sys 0m1.017s 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.301 ************************************ 00:04:42.301 END TEST skip_rpc_with_json 00:04:42.301 ************************************ 00:04:42.301 01:46:01 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:42.301 01:46:01 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.301 01:46:01 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.301 01:46:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.301 ************************************ 00:04:42.301 START TEST skip_rpc_with_delay 00:04:42.301 ************************************ 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.301 [2024-10-09 01:46:01.831686] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is 
going to be started. 00:04:42.301 [2024-10-09 01:46:01.831805] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:42.301 00:04:42.301 real 0m0.170s 00:04:42.301 user 0m0.083s 00:04:42.301 sys 0m0.086s 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.301 01:46:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:42.301 ************************************ 00:04:42.301 END TEST skip_rpc_with_delay 00:04:42.301 ************************************ 00:04:42.301 01:46:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:42.301 01:46:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:42.301 01:46:01 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:42.301 01:46:01 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.301 01:46:01 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.301 01:46:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.301 ************************************ 00:04:42.301 START TEST exit_on_failed_rpc_init 00:04:42.301 ************************************ 00:04:42.301 01:46:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:42.301 01:46:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3083764 00:04:42.301 01:46:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3083764 00:04:42.301 01:46:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 3083764 ']' 00:04:42.301 01:46:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.301 01:46:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.301 01:46:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:42.301 01:46:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.301 01:46:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:42.301 01:46:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:42.301 [2024-10-09 01:46:02.053633] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
00:04:42.301 [2024-10-09 01:46:02.053741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3083764 ] 00:04:42.559 [2024-10-09 01:46:02.182328] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.560 [2024-10-09 01:46:02.369766] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.497 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:43.497 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:43.497 01:46:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.497 01:46:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.497 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:43.497 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.497 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.497 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.497 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.497 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.497 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.497 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.497 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.497 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:43.497 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.497 [2024-10-09 01:46:03.225026] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:04:43.497 [2024-10-09 01:46:03.225127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3083853 ] 00:04:43.755 [2024-10-09 01:46:03.352614] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.755 [2024-10-09 01:46:03.558365] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.755 [2024-10-09 01:46:03.558470] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:43.755 [2024-10-09 01:46:03.558490] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:43.755 [2024-10-09 01:46:03.558502] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:44.323 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:44.323 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:44.323 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:44.323 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:44.323 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:44.323 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:44.323 01:46:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:44.323 01:46:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3083764 00:04:44.323 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 3083764 ']' 00:04:44.323 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 3083764 00:04:44.323 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:44.323 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:44.323 01:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3083764 00:04:44.323 01:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:44.323 01:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:44.323 01:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3083764' 00:04:44.323 killing process with pid 3083764 00:04:44.323 01:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 3083764 00:04:44.323 01:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 3083764 00:04:46.862 00:04:46.862 real 0m4.448s 00:04:46.862 user 0m4.967s 00:04:46.862 sys 0m0.705s 00:04:46.862 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.862 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:46.862 ************************************ 00:04:46.862 END TEST exit_on_failed_rpc_init 00:04:46.862 ************************************ 00:04:46.862 01:46:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/config.json 00:04:46.862 00:04:46.862 real 0m23.944s 00:04:46.862 user 0m23.065s 00:04:46.862 sys 0m2.627s 00:04:46.862 01:46:06 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.862 01:46:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.862 ************************************ 00:04:46.862 END TEST skip_rpc 00:04:46.862 ************************************ 00:04:46.862 01:46:06 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:46.862 01:46:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.862 01:46:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.862 01:46:06 -- 
common/autotest_common.sh@10 -- # set +x 00:04:46.862 ************************************ 00:04:46.862 START TEST rpc_client 00:04:46.862 ************************************ 00:04:46.862 01:46:06 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:46.862 * Looking for test storage... 00:04:46.862 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_client 00:04:46.862 01:46:06 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:46.862 01:46:06 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:46.862 01:46:06 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:47.121 01:46:06 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:47.121 01:46:06 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.122 01:46:06 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:47.122 01:46:06 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.122 01:46:06 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:47.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.122 --rc genhtml_branch_coverage=1 00:04:47.122 --rc genhtml_function_coverage=1 00:04:47.122 --rc genhtml_legend=1 00:04:47.122 --rc geninfo_all_blocks=1 00:04:47.122 --rc geninfo_unexecuted_blocks=1 00:04:47.122 00:04:47.122 ' 00:04:47.122 01:46:06 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:47.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.122 --rc genhtml_branch_coverage=1 00:04:47.122 --rc genhtml_function_coverage=1 00:04:47.122 --rc genhtml_legend=1 00:04:47.122 --rc geninfo_all_blocks=1 00:04:47.122 --rc geninfo_unexecuted_blocks=1 00:04:47.122 00:04:47.122 ' 00:04:47.122 01:46:06 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:47.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.122 --rc genhtml_branch_coverage=1 00:04:47.122 --rc genhtml_function_coverage=1 00:04:47.122 --rc genhtml_legend=1 00:04:47.122 --rc geninfo_all_blocks=1 00:04:47.122 --rc geninfo_unexecuted_blocks=1 00:04:47.122 00:04:47.122 ' 00:04:47.122 01:46:06 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:47.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.122 --rc genhtml_branch_coverage=1 00:04:47.122 --rc genhtml_function_coverage=1 00:04:47.122 --rc genhtml_legend=1 00:04:47.122 --rc geninfo_all_blocks=1 00:04:47.122 --rc geninfo_unexecuted_blocks=1 00:04:47.122 00:04:47.122 ' 00:04:47.122 01:46:06 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:47.122 OK 00:04:47.122 01:46:06 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:47.122 00:04:47.122 real 0m0.268s 00:04:47.122 user 0m0.139s 00:04:47.122 sys 0m0.147s 00:04:47.122 01:46:06 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.122 01:46:06 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:47.122 ************************************ 00:04:47.122 END TEST rpc_client 00:04:47.122 ************************************ 00:04:47.122 01:46:06 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_config.sh 
00:04:47.122 01:46:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.122 01:46:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.122 01:46:06 -- common/autotest_common.sh@10 -- # set +x 00:04:47.122 ************************************ 00:04:47.122 START TEST json_config 00:04:47.122 ************************************ 00:04:47.122 01:46:06 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_config.sh 00:04:47.382 01:46:06 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:47.382 01:46:06 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:47.382 01:46:06 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:47.382 01:46:07 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:47.382 01:46:07 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.382 01:46:07 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.382 01:46:07 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.382 01:46:07 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.382 01:46:07 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.382 01:46:07 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.382 01:46:07 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.382 01:46:07 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.382 01:46:07 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.382 01:46:07 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.382 01:46:07 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.382 01:46:07 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:47.382 01:46:07 json_config -- scripts/common.sh@345 -- # : 1 00:04:47.382 01:46:07 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.382 01:46:07 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.382 01:46:07 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:47.382 01:46:07 json_config -- scripts/common.sh@353 -- # local d=1 00:04:47.382 01:46:07 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.382 01:46:07 json_config -- scripts/common.sh@355 -- # echo 1 00:04:47.382 01:46:07 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.382 01:46:07 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:47.382 01:46:07 json_config -- scripts/common.sh@353 -- # local d=2 00:04:47.382 01:46:07 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.382 01:46:07 json_config -- scripts/common.sh@355 -- # echo 2 00:04:47.382 01:46:07 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.382 01:46:07 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.382 01:46:07 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.382 01:46:07 json_config -- scripts/common.sh@368 -- # return 0 00:04:47.383 01:46:07 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.383 01:46:07 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:47.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.383 --rc genhtml_branch_coverage=1 00:04:47.383 --rc genhtml_function_coverage=1 00:04:47.383 --rc genhtml_legend=1 00:04:47.383 --rc geninfo_all_blocks=1 00:04:47.383 --rc geninfo_unexecuted_blocks=1 00:04:47.383 00:04:47.383 ' 00:04:47.383 01:46:07 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:47.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.383 --rc genhtml_branch_coverage=1 00:04:47.383 --rc genhtml_function_coverage=1 00:04:47.383 --rc genhtml_legend=1 00:04:47.383 --rc geninfo_all_blocks=1 00:04:47.383 --rc geninfo_unexecuted_blocks=1 00:04:47.383 00:04:47.383 ' 00:04:47.383 01:46:07 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:47.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.383 --rc genhtml_branch_coverage=1 00:04:47.383 --rc genhtml_function_coverage=1 00:04:47.383 --rc genhtml_legend=1 00:04:47.383 --rc geninfo_all_blocks=1 00:04:47.383 --rc geninfo_unexecuted_blocks=1 00:04:47.383 00:04:47.383 ' 00:04:47.383 01:46:07 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:47.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.383 --rc genhtml_branch_coverage=1 00:04:47.383 --rc genhtml_function_coverage=1 00:04:47.383 --rc genhtml_legend=1 00:04:47.383 --rc geninfo_all_blocks=1 00:04:47.383 --rc geninfo_unexecuted_blocks=1 00:04:47.383 00:04:47.383 ' 00:04:47.383 01:46:07 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:47.383 01:46:07 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:04:47.383 01:46:07 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.383 01:46:07 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.383 01:46:07 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.383 01:46:07 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.383 01:46:07 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.383 01:46:07 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.383 01:46:07 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.383 01:46:07 json_config -- paths/export.sh@5 -- # export PATH 00:04:47.383 01:46:07 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@51 -- # : 0 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:47.383 01:46:07 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.383 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.383 01:46:07 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.383 01:46:07 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/common.sh 00:04:47.383 01:46:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:47.383 01:46:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:47.383 01:46:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:47.383 01:46:07 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:47.383 01:46:07 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:47.383 01:46:07 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:47.383 01:46:07 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:47.383 01:46:07 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:47.383 01:46:07 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:47.383 01:46:07 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:47.383 01:46:07 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_initiator_config.json') 00:04:47.383 01:46:07 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:47.383 01:46:07 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:47.383 01:46:07 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:47.383 01:46:07 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:47.383 INFO: JSON configuration test init 00:04:47.383 01:46:07 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:47.383 01:46:07 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:47.383 01:46:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:47.383 01:46:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.383 01:46:07 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:47.383 01:46:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:47.383 01:46:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.383 01:46:07 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:47.383 01:46:07 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:47.383 01:46:07 json_config -- json_config/common.sh@10 -- # shift 00:04:47.383 01:46:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:47.383 01:46:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:47.383 01:46:07 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:47.383 01:46:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:47.383 01:46:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:47.383 01:46:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3084496 00:04:47.383 01:46:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:47.383 Waiting for target to run... 00:04:47.383 01:46:07 json_config -- json_config/common.sh@25 -- # waitforlisten 3084496 /var/tmp/spdk_tgt.sock 00:04:47.383 01:46:07 json_config -- common/autotest_common.sh@831 -- # '[' -z 3084496 ']' 00:04:47.384 01:46:07 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:47.384 01:46:07 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:47.384 01:46:07 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:47.384 01:46:07 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:47.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:47.384 01:46:07 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:47.384 01:46:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.384 [2024-10-09 01:46:07.178174] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
00:04:47.384 [2024-10-09 01:46:07.178277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3084496 ] 00:04:47.951 [2024-10-09 01:46:07.530849] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.951 [2024-10-09 01:46:07.707546] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.210 01:46:07 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.210 01:46:07 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:48.210 01:46:07 json_config -- json_config/common.sh@26 -- # echo '' 00:04:48.210 00:04:48.210 01:46:07 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:48.210 01:46:07 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:48.210 01:46:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:48.210 01:46:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.210 01:46:07 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:48.210 01:46:07 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:48.210 01:46:07 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:48.210 01:46:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.210 01:46:08 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:48.210 01:46:08 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:48.210 01:46:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:52.546 01:46:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.546 01:46:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:52.546 01:46:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:52.546 01:46:11 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@54 -- # sort 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:52.546 01:46:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:52.546 01:46:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:52.546 01:46:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.546 01:46:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:04:52.546 01:46:11 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:04:52.546 01:46:11 json_config -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:04:52.546 01:46:11 json_config -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:52.546 01:46:11 json_config -- nvmf/common.sh@474 -- # prepare_net_devs 00:04:52.546 01:46:11 json_config -- nvmf/common.sh@436 -- # local -g is_hw=no 00:04:52.546 01:46:11 json_config -- nvmf/common.sh@438 -- # remove_spdk_ns 00:04:52.546 01:46:11 json_config -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:52.546 01:46:11 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:04:52.546 01:46:11 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:52.546 01:46:11 json_config -- nvmf/common.sh@440 -- # [[ phy-fallback != virt ]] 00:04:52.546 01:46:11 json_config -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:04:52.546 01:46:11 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:04:52.546 01:46:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@317 
-- # pci_drivers=() 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@320 -- # e810=() 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@321 -- # x722=() 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@322 -- # mlx=() 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:04:59.112 Found 0000:18:00.0 (0x8086 - 0x159b) 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 
00:04:59.112 Found 0000:18:00.1 (0x8086 - 0x159b) 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:59.112 01:46:18 json_config -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:59.113 01:46:18 json_config -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:59.113 01:46:18 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:04:59.113 01:46:18 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:59.113 01:46:18 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:59.113 01:46:18 json_config -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:59.113 01:46:18 json_config -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:04:59.113 01:46:18 json_config -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:04:59.113 01:46:18 json_config -- nvmf/common.sh@401 -- # (( 0 != 1 )) 00:04:59.113 01:46:18 json_config -- nvmf/common.sh@401 -- # modprobe -r irdma 00:04:59.113 01:46:18 json_config -- nvmf/common.sh@403 -- # modinfo irdma 00:04:59.113 01:46:18 json_config -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:04:59.371 01:46:19 json_config -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:04:59.371 01:46:19 json_config -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:59.371 01:46:19 json_config -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:04:59.371 01:46:19 json_config -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:04:59.371 01:46:19 json_config -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:59.371 01:46:19 json_config -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:04:59.371 Found net devices under 0000:18:00.0: cvl_0_0 00:04:59.371 01:46:19 json_config -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:04:59.371 01:46:19 json_config -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:04:59.371 01:46:19 json_config -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:59.371 01:46:19 json_config -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:04:59.372 Found net devices under 0000:18:00.1: cvl_0_1 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@440 -- # is_hw=yes 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@446 -- # rdma_device_init 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@62 -- # uname 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:04:59.372 01:46:19 json_config -- 
nvmf/common.sh@68 -- # modprobe ib_umad 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@528 -- # allocate_nic_ips 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@77 -- # get_rdma_if_list 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:59.372 01:46:19 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@108 -- # echo cvl_0_0 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@108 -- # echo cvl_0_1 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:04:59.632 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:04:59.632 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:04:59.632 altname enp24s0f0np0 00:04:59.632 altname ens785f0np0 00:04:59.632 inet 192.168.100.8/24 scope global cvl_0_0 00:04:59.632 valid_lft forever preferred_lft forever 00:04:59.632 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:04:59.632 valid_lft forever preferred_lft forever 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@116 -- 
# interface=cvl_0_1 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:04:59.632 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:04:59.632 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:04:59.632 altname enp24s0f1np1 00:04:59.632 altname ens785f1np1 00:04:59.632 inet 192.168.100.9/24 scope global cvl_0_1 00:04:59.632 valid_lft forever preferred_lft forever 00:04:59.632 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:04:59.632 valid_lft forever preferred_lft forever 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@448 -- # return 0 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@108 -- # echo cvl_0_0 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:59.632 01:46:19 json_config -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@108 -- # echo cvl_0_1 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@90 -- # for nic_name in 
$(get_rdma_if_list) 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:04:59.633 192.168.100.9' 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:04:59.633 192.168.100.9' 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@483 -- # head -n 1 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:04:59.633 192.168.100.9' 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@484 -- # tail -n +2 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@484 -- # head -n 1 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:04:59.633 01:46:19 json_config -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:04:59.633 01:46:19 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:04:59.633 01:46:19 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:59.633 01:46:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:59.892 MallocForNvmf0 00:04:59.892 01:46:19 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:59.892 01:46:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:00.150 MallocForNvmf1 00:05:00.150 01:46:19 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:00.151 01:46:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:00.151 [2024-10-09 01:46:19.917393] rdma.c:2735:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:00.151 [2024-10-09 01:46:19.949688] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x612000029bc0/0x617000007fc0) succeed. 00:05:00.151 [2024-10-09 01:46:19.960635] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029d40/0x617000008340) succeed. 00:05:00.151 [2024-10-09 01:46:19.960665] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:05:00.151 [2024-10-09 01:46:19.963458] iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 1024/3071 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:05:00.151 [2024-10-09 01:46:19.963484] iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:05:00.151 [2024-10-09 01:46:19.965714] transport.c: 636:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:05:00.410 01:46:19 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:00.410 01:46:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:00.410 01:46:20 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:00.410 01:46:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:00.669 01:46:20 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:00.669 01:46:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:00.927 01:46:20 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:00.927 01:46:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:00.927 [2024-10-09 01:46:20.736271] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:01.187 01:46:20 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:01.187 01:46:20 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:01.187 01:46:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.187 01:46:20 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:01.187 01:46:20 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:01.187 01:46:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.187 01:46:20 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:01.187 01:46:20 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:01.187 01:46:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:01.445 MallocBdevForConfigChangeCheck 00:05:01.446 01:46:21 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:01.446 01:46:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:01.446 01:46:21 json_config -- 
common/autotest_common.sh@10 -- # set +x 00:05:01.446 01:46:21 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:01.446 01:46:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:01.704 01:46:21 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:01.704 INFO: shutting down applications... 00:05:01.704 01:46:21 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:01.704 01:46:21 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:01.704 01:46:21 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:01.704 01:46:21 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:05.893 Calling clear_iscsi_subsystem 00:05:05.893 Calling clear_nvmf_subsystem 00:05:05.893 Calling clear_nbd_subsystem 00:05:05.893 Calling clear_ublk_subsystem 00:05:05.893 Calling clear_vhost_blk_subsystem 00:05:05.893 Calling clear_vhost_scsi_subsystem 00:05:05.893 Calling clear_bdev_subsystem 00:05:05.893 01:46:25 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py 00:05:05.893 01:46:25 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:05.893 01:46:25 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:05.893 01:46:25 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:05.893 01:46:25 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:05.893 01:46:25 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:05.893 01:46:25 json_config -- json_config/json_config.sh@352 -- # break 00:05:05.893 01:46:25 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:05.893 01:46:25 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:05.893 01:46:25 json_config -- json_config/common.sh@31 -- # local app=target 00:05:05.893 01:46:25 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:05.893 01:46:25 json_config -- json_config/common.sh@35 -- # [[ -n 3084496 ]] 00:05:05.893 01:46:25 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3084496 00:05:05.893 01:46:25 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:05.893 01:46:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.893 01:46:25 json_config -- json_config/common.sh@41 -- # kill -0 3084496 00:05:05.893 01:46:25 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.460 01:46:26 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.460 01:46:26 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.460 01:46:26 json_config -- json_config/common.sh@41 -- # kill -0 3084496 00:05:06.460 01:46:26 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.028 01:46:26 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.028 01:46:26 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.028 
01:46:26 json_config -- json_config/common.sh@41 -- # kill -0 3084496 00:05:07.028 01:46:26 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.597 01:46:27 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.597 01:46:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.598 01:46:27 json_config -- json_config/common.sh@41 -- # kill -0 3084496 00:05:07.598 01:46:27 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:07.598 01:46:27 json_config -- json_config/common.sh@43 -- # break 00:05:07.598 01:46:27 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:07.598 01:46:27 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:07.598 SPDK target shutdown done 00:05:07.598 01:46:27 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:07.598 INFO: relaunching applications... 00:05:07.598 01:46:27 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.598 01:46:27 json_config -- json_config/common.sh@9 -- # local app=target 00:05:07.598 01:46:27 json_config -- json_config/common.sh@10 -- # shift 00:05:07.598 01:46:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:07.598 01:46:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:07.598 01:46:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:07.598 01:46:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.598 01:46:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.598 01:46:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3089416 00:05:07.598 01:46:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:07.598 Waiting for target to run... 00:05:07.598 01:46:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.598 01:46:27 json_config -- json_config/common.sh@25 -- # waitforlisten 3089416 /var/tmp/spdk_tgt.sock 00:05:07.598 01:46:27 json_config -- common/autotest_common.sh@831 -- # '[' -z 3089416 ']' 00:05:07.598 01:46:27 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:07.598 01:46:27 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:07.598 01:46:27 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:07.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:07.598 01:46:27 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:07.598 01:46:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.598 [2024-10-09 01:46:27.287966] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
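Before the relaunch just logged, the old target was brought down by the poll loop traced at json_config/common.sh@38-45: send SIGINT, then probe liveness with `kill -0` at half-second intervals. A minimal sketch of that pattern; `kill -0` delivers no signal and only tests whether the process still exists:

app_pid=3084496                               # pid from the trace above
kill -SIGINT "$app_pid"                       # ask spdk_tgt to shut down cleanly
for ((i = 0; i < 30; i++)); do                # 30 tries x 0.5 s = 15 s budget
    kill -0 "$app_pid" 2>/dev/null || break   # process gone -> stop polling
    sleep 0.5
done
echo 'SPDK target shutdown done'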
00:05:07.598 [2024-10-09 01:46:27.288077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089416 ] 00:05:08.166 [2024-10-09 01:46:27.864263] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.425 [2024-10-09 01:46:28.039086] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.618 [2024-10-09 01:46:31.645109] transport.c: 288:nvmf_transport_create: *WARNING*: The num_shared_buffers value (4095) is larger than the available iobuf pool size (1024). Please increase the iobuf pool sizes. 00:05:12.618 [2024-10-09 01:46:31.663449] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x61200002a4c0/0x6170000086c0) succeed. 00:05:12.618 [2024-10-09 01:46:31.674416] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x61200002a640/0x617000008a40) succeed. 00:05:12.618 [2024-10-09 01:46:31.677373] iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 1024/3071 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:05:12.618 [2024-10-09 01:46:31.677406] iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:05:12.618 [2024-10-09 01:46:31.679575] transport.c: 636:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:05:12.618 [2024-10-09 01:46:31.707936] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:12.618 01:46:32 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.618 01:46:32 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:12.618 01:46:32 json_config -- json_config/common.sh@26 -- # echo '' 00:05:12.618 00:05:12.618 01:46:32 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:12.618 01:46:32 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:12.618 INFO: Checking if target configuration is the same... 00:05:12.618 01:46:32 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.618 01:46:32 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:12.618 01:46:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.618 + '[' 2 -ne 2 ']' 00:05:12.619 +++ dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:12.619 ++ readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/../.. 
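The comparison under way here follows json_diff.sh's recipe: normalize both JSON documents with config_filter.py -method sort, then `diff -u` the results, so key ordering never produces a false mismatch. A condensed sketch with illustrative file names (the real script feeds one side in over /dev/fd/62, as the command line above shows):

filter=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py
"$filter" -method sort < running_config.json  > /tmp/a.sorted    # names illustrative
"$filter" -method sort < spdk_tgt_config.json > /tmp/b.sorted
if diff -u /tmp/a.sorted /tmp/b.sorted; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi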
00:05:12.619 + rootdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:05:12.619 +++ basename /dev/fd/62 00:05:12.619 ++ mktemp /tmp/62.XXX 00:05:12.619 + tmp_file_1=/tmp/62.hi2 00:05:12.619 +++ basename /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.619 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:12.619 + tmp_file_2=/tmp/spdk_tgt_config.json.6n3 00:05:12.619 + ret=0 00:05:12.619 + /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.877 + /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.877 + diff -u /tmp/62.hi2 /tmp/spdk_tgt_config.json.6n3 00:05:12.877 + echo 'INFO: JSON config files are the same' 00:05:12.877 INFO: JSON config files are the same 00:05:12.877 + rm /tmp/62.hi2 /tmp/spdk_tgt_config.json.6n3 00:05:12.877 + exit 0 00:05:12.877 01:46:32 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:12.877 01:46:32 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:12.877 INFO: changing configuration and checking if this can be detected... 00:05:12.877 01:46:32 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:12.878 01:46:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:13.136 01:46:32 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.136 01:46:32 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:13.136 01:46:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.136 + '[' 2 -ne 2 ']' 00:05:13.136 +++ dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:13.136 ++ readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/../.. 
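The change-detection pass now in progress first mutated the running target so the next diff must fail: the sentinel bdev created during setup was deleted over RPC, and the live config is saved again before re-running json_diff.sh. A sketch of those two RPCs; the output redirection path is illustrative:

rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
# removing the sentinel guarantees the live config no longer matches the snapshot
"$rpc" -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
"$rpc" -s /var/tmp/spdk_tgt.sock save_config > /tmp/current_config.json   # path illustrative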
00:05:13.136 + rootdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:05:13.136 +++ basename /dev/fd/62 00:05:13.136 ++ mktemp /tmp/62.XXX 00:05:13.136 + tmp_file_1=/tmp/62.Hn8 00:05:13.136 +++ basename /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.136 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:13.136 + tmp_file_2=/tmp/spdk_tgt_config.json.3Po 00:05:13.136 + ret=0 00:05:13.136 + /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:13.394 + /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:13.394 + diff -u /tmp/62.Hn8 /tmp/spdk_tgt_config.json.3Po 00:05:13.394 + ret=1 00:05:13.394 + echo '=== Start of file: /tmp/62.Hn8 ===' 00:05:13.394 + cat /tmp/62.Hn8 00:05:13.394 + echo '=== End of file: /tmp/62.Hn8 ===' 00:05:13.394 + echo '' 00:05:13.395 + echo '=== Start of file: /tmp/spdk_tgt_config.json.3Po ===' 00:05:13.395 + cat /tmp/spdk_tgt_config.json.3Po 00:05:13.395 + echo '=== End of file: /tmp/spdk_tgt_config.json.3Po ===' 00:05:13.395 + echo '' 00:05:13.395 + rm /tmp/62.Hn8 /tmp/spdk_tgt_config.json.3Po 00:05:13.395 + exit 1 00:05:13.395 01:46:33 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:13.395 INFO: configuration change detected. 00:05:13.395 01:46:33 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:13.395 01:46:33 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:13.395 01:46:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:13.395 01:46:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.395 01:46:33 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:13.395 01:46:33 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:13.395 01:46:33 json_config -- json_config/json_config.sh@324 -- # [[ -n 3089416 ]] 00:05:13.395 01:46:33 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:13.395 01:46:33 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:13.395 01:46:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:13.395 01:46:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.395 01:46:33 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:13.395 01:46:33 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:13.395 01:46:33 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:13.395 01:46:33 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:13.395 01:46:33 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:13.395 01:46:33 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:13.395 01:46:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:13.395 01:46:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.654 01:46:33 json_config -- json_config/json_config.sh@330 -- # killprocess 3089416 00:05:13.654 01:46:33 json_config -- common/autotest_common.sh@950 -- # '[' -z 3089416 ']' 00:05:13.654 01:46:33 json_config -- common/autotest_common.sh@954 -- # kill -0 3089416 00:05:13.654 01:46:33 json_config -- common/autotest_common.sh@955 -- # uname 00:05:13.654 01:46:33 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.654 01:46:33 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3089416 00:05:13.654 01:46:33 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.654 01:46:33 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.654 01:46:33 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3089416' 00:05:13.654 killing process with pid 3089416 00:05:13.654 01:46:33 json_config -- common/autotest_common.sh@969 -- # kill 3089416 00:05:13.654 01:46:33 json_config -- common/autotest_common.sh@974 -- # wait 3089416 00:05:18.942 01:46:37 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.942 01:46:37 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:18.942 01:46:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:18.942 01:46:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.942 01:46:38 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:18.942 01:46:38 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:18.942 INFO: Success 00:05:18.942 01:46:38 json_config -- json_config/json_config.sh@1 -- # nvmftestfini 00:05:18.942 01:46:38 json_config -- nvmf/common.sh@514 -- # nvmfcleanup 00:05:18.942 01:46:38 json_config -- nvmf/common.sh@121 -- # sync 00:05:18.942 01:46:38 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:05:18.942 01:46:38 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:05:18.942 01:46:38 json_config -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:05:18.942 01:46:38 json_config -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:05:18.942 01:46:38 json_config -- nvmf/common.sh@521 -- # [[ '' == \t\c\p ]] 00:05:18.942 00:05:18.942 real 0m31.146s 00:05:18.942 user 0m33.063s 00:05:18.942 sys 0m8.827s 00:05:18.942 01:46:38 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.942 01:46:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.942 ************************************ 00:05:18.942 END TEST json_config 00:05:18.942 ************************************ 00:05:18.942 01:46:38 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:18.942 01:46:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.942 01:46:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.942 01:46:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.942 ************************************ 00:05:18.942 START TEST json_config_extra_key 00:05:18.942 ************************************ 00:05:18.942 01:46:38 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:18.942 01:46:38 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:18.942 01:46:38 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:18.942 01:46:38 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:18.942 01:46:38 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.942 01:46:38 
json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.942 01:46:38 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:18.942 01:46:38 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.942 01:46:38 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:18.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.942 --rc genhtml_branch_coverage=1 00:05:18.942 --rc genhtml_function_coverage=1 00:05:18.942 --rc genhtml_legend=1 00:05:18.942 --rc geninfo_all_blocks=1 00:05:18.942 --rc geninfo_unexecuted_blocks=1 00:05:18.942 00:05:18.942 ' 00:05:18.942 01:46:38 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:18.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.942 --rc genhtml_branch_coverage=1 00:05:18.942 --rc genhtml_function_coverage=1 00:05:18.942 --rc genhtml_legend=1 00:05:18.942 --rc geninfo_all_blocks=1 00:05:18.942 --rc geninfo_unexecuted_blocks=1 00:05:18.942 00:05:18.942 ' 00:05:18.942 01:46:38 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:18.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.942 --rc genhtml_branch_coverage=1 
00:05:18.942 --rc genhtml_function_coverage=1 00:05:18.942 --rc genhtml_legend=1 00:05:18.942 --rc geninfo_all_blocks=1 00:05:18.942 --rc geninfo_unexecuted_blocks=1 00:05:18.942 00:05:18.942 ' 00:05:18.942 01:46:38 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:18.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.942 --rc genhtml_branch_coverage=1 00:05:18.942 --rc genhtml_function_coverage=1 00:05:18.942 --rc genhtml_legend=1 00:05:18.942 --rc geninfo_all_blocks=1 00:05:18.942 --rc geninfo_unexecuted_blocks=1 00:05:18.942 00:05:18.942 ' 00:05:18.942 01:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:05:18.942 01:46:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:18.942 01:46:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:05:18.943 01:46:38 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.943 01:46:38 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.943 01:46:38 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.943 01:46:38 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.943 01:46:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.943 01:46:38 json_config_extra_key -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.943 01:46:38 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.943 01:46:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:18.943 01:46:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.943 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.943 01:46:38 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.943 01:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/common.sh 00:05:18.943 01:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:18.943 01:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:18.943 01:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:18.943 01:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:18.943 01:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:18.943 01:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:18.943 01:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:18.943 01:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:18.943 01:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:18.943 01:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:18.943 INFO: launching applications... 00:05:18.943 01:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/extra_key.json 00:05:18.943 01:46:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:18.943 01:46:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:18.943 01:46:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:18.943 01:46:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:18.943 01:46:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:18.943 01:46:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.943 01:46:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.943 01:46:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3091036 00:05:18.943 01:46:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:18.943 Waiting for target to run... 00:05:18.943 01:46:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3091036 /var/tmp/spdk_tgt.sock 00:05:18.943 01:46:38 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 3091036 ']' 00:05:18.943 01:46:38 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:18.943 01:46:38 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/extra_key.json 00:05:18.943 01:46:38 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.943 01:46:38 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:18.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:18.943 01:46:38 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.943 01:46:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:18.943 [2024-10-09 01:46:38.419084] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
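The target launched here is booted straight from a static JSON file (`--json .../extra_key.json`) rather than being configured over RPC after startup. A sketch of the shape such a config file takes; the malloc bdev below is illustrative, not the actual contents of extra_key.json:

cat > /tmp/extra_key.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 4096, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt \
    -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /tmp/extra_key.json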
00:05:18.943 [2024-10-09 01:46:38.419183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3091036 ] 00:05:19.228 [2024-10-09 01:46:38.992079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.488 [2024-10-09 01:46:39.178272] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.435 01:46:39 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.435 01:46:39 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:20.435 01:46:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:20.435 00:05:20.435 01:46:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:20.435 INFO: shutting down applications... 00:05:20.435 01:46:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:20.436 01:46:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:20.436 01:46:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:20.436 01:46:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3091036 ]] 00:05:20.436 01:46:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3091036 00:05:20.436 01:46:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:20.436 01:46:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.436 01:46:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3091036 00:05:20.436 01:46:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.694 01:46:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.694 01:46:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.694 01:46:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3091036 00:05:20.694 01:46:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.263 01:46:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.263 01:46:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.263 01:46:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3091036 00:05:21.263 01:46:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.832 01:46:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.832 01:46:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.832 01:46:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3091036 00:05:21.832 01:46:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:22.400 01:46:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:22.400 01:46:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.400 01:46:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3091036 00:05:22.400 01:46:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:22.666 01:46:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:22.666 01:46:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.666 01:46:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3091036 
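The `kill -0 3091036` probes above and below are the same liveness test used by the killprocess helper seen after the json_config run: before killing by pid, autotest_common.sh checks what the pid currently names, so a recycled pid is never killed blindly. A minimal sketch of that guard; the real helper also special-cases a `sudo` wrapper process:

pid=3091036                                   # pid from the trace, for illustration
if kill -0 "$pid" 2>/dev/null; then
    name=$(ps --no-headers -o comm= "$pid")   # reads "reactor_0" for spdk_tgt
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # wait can only reap our own children
fi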
00:05:22.666 01:46:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:23.236 01:46:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:23.236 01:46:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.236 01:46:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3091036 00:05:23.236 01:46:42 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:23.236 01:46:42 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:23.236 01:46:42 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:23.236 01:46:42 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:23.236 SPDK target shutdown done 00:05:23.236 01:46:42 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:23.236 Success 00:05:23.236 00:05:23.236 real 0m4.808s 00:05:23.236 user 0m4.173s 00:05:23.236 sys 0m0.876s 00:05:23.236 01:46:42 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.236 01:46:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:23.236 ************************************ 00:05:23.236 END TEST json_config_extra_key 00:05:23.236 ************************************ 00:05:23.236 01:46:42 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:23.236 01:46:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.236 01:46:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.236 01:46:42 -- common/autotest_common.sh@10 -- # set +x 00:05:23.236 ************************************ 00:05:23.236 START TEST alias_rpc 00:05:23.236 ************************************ 00:05:23.236 01:46:43 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:23.496 * Looking for test storage... 
00:05:23.496 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/alias_rpc 00:05:23.496 01:46:43 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:23.496 01:46:43 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:23.496 01:46:43 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:23.496 01:46:43 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.496 01:46:43 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:23.496 01:46:43 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.496 01:46:43 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:23.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.496 --rc genhtml_branch_coverage=1 00:05:23.496 --rc genhtml_function_coverage=1 00:05:23.496 --rc genhtml_legend=1 00:05:23.496 --rc geninfo_all_blocks=1 00:05:23.496 --rc geninfo_unexecuted_blocks=1 00:05:23.496 00:05:23.496 ' 00:05:23.496 01:46:43 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:23.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.496 --rc genhtml_branch_coverage=1 00:05:23.496 --rc genhtml_function_coverage=1 00:05:23.496 --rc genhtml_legend=1 00:05:23.496 --rc geninfo_all_blocks=1 00:05:23.496 --rc geninfo_unexecuted_blocks=1 00:05:23.496 00:05:23.496 ' 00:05:23.496 01:46:43 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:23.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.496 --rc genhtml_branch_coverage=1 00:05:23.496 --rc genhtml_function_coverage=1 00:05:23.496 --rc genhtml_legend=1 00:05:23.496 --rc geninfo_all_blocks=1 00:05:23.496 --rc geninfo_unexecuted_blocks=1 00:05:23.496 00:05:23.496 ' 00:05:23.496 01:46:43 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:23.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.496 --rc genhtml_branch_coverage=1 00:05:23.496 --rc genhtml_function_coverage=1 00:05:23.496 --rc genhtml_legend=1 00:05:23.496 --rc geninfo_all_blocks=1 00:05:23.496 --rc geninfo_unexecuted_blocks=1 00:05:23.496 00:05:23.496 ' 00:05:23.496 01:46:43 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:23.496 01:46:43 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3091704 00:05:23.496 01:46:43 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3091704 00:05:23.496 01:46:43 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.496 01:46:43 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 3091704 ']' 00:05:23.496 01:46:43 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.496 01:46:43 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.496 01:46:43 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.496 01:46:43 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.496 01:46:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.496 [2024-10-09 01:46:43.308536] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
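The `lt 1.15 2` lcov check traced just above (scripts/common.sh@333-368) compares version strings field by field after splitting on the characters in IFS=.-:. A condensed sketch of that logic, not the verbatim helper; the original additionally validates each field with a decimal() guard:

lt() {                                    # "is version $1 strictly older than $2?"
    local -a v1 v2
    local i
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly older
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly newer
    done
    return 1                              # equal versions are not less-than
}
lt 1.15 2 && echo 'lcov predates 2.x'     # matches the checks in the log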
00:05:23.497 [2024-10-09 01:46:43.308662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3091704 ] 00:05:23.756 [2024-10-09 01:46:43.435987] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.016 [2024-10-09 01:46:43.624434] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.585 01:46:44 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.585 01:46:44 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:24.585 01:46:44 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:24.845 01:46:44 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3091704 00:05:24.845 01:46:44 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 3091704 ']' 00:05:24.845 01:46:44 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 3091704 00:05:24.845 01:46:44 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:24.845 01:46:44 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:24.845 01:46:44 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3091704 00:05:24.845 01:46:44 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:24.845 01:46:44 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:24.845 01:46:44 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3091704' 00:05:24.845 killing process with pid 3091704 00:05:24.845 01:46:44 alias_rpc -- common/autotest_common.sh@969 -- # kill 3091704 00:05:24.845 01:46:44 alias_rpc -- common/autotest_common.sh@974 -- # wait 3091704 00:05:27.390 00:05:27.390 real 0m4.060s 00:05:27.390 user 0m4.023s 00:05:27.390 sys 0m0.665s 00:05:27.390 01:46:47 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.390 01:46:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.390 ************************************ 00:05:27.390 END TEST alias_rpc 00:05:27.390 ************************************ 00:05:27.390 01:46:47 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:27.390 01:46:47 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:27.390 01:46:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.390 01:46:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.390 01:46:47 -- common/autotest_common.sh@10 -- # set +x 00:05:27.390 ************************************ 00:05:27.390 START TEST spdkcli_tcp 00:05:27.390 ************************************ 00:05:27.390 01:46:47 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:27.649 * Looking for test storage... 
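The alias_rpc run that just finished above reduces to a short RPC round trip. A minimal sketch using the commands from its trace (waitforlisten and killprocess are helpers from the test harness's autotest_common.sh, shown here as placeholders; the empty JSON config piped in is illustrative only, not the test's actual input):

  # Start the target and wait for its UNIX-domain RPC socket.
  ./build/bin/spdk_tgt &
  spdk_tgt_pid=$!
  waitforlisten $spdk_tgt_pid                       # polls /var/tmp/spdk.sock
  # load_config -i reads a JSON config from stdin, as alias_rpc.sh@17 does above.
  echo '{"subsystems": []}' | ./scripts/rpc.py load_config -i
  killprocess $spdk_tgt_pid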
00:05:27.649 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli 00:05:27.649 01:46:47 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:27.649 01:46:47 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:27.649 01:46:47 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:27.649 01:46:47 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:27.649 01:46:47 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.649 01:46:47 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.649 01:46:47 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.649 01:46:47 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.649 01:46:47 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.649 01:46:47 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.649 01:46:47 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.649 01:46:47 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.649 01:46:47 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.649 01:46:47 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.649 01:46:47 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.649 01:46:47 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:27.649 01:46:47 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:27.649 01:46:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.649 01:46:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.649 01:46:47 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:27.649 01:46:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:27.650 01:46:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.650 01:46:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:27.650 01:46:47 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.650 01:46:47 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:27.650 01:46:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:27.650 01:46:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.650 01:46:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:27.650 01:46:47 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.650 01:46:47 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.650 01:46:47 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.650 01:46:47 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:27.650 01:46:47 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.650 01:46:47 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:27.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.650 --rc genhtml_branch_coverage=1 00:05:27.650 --rc genhtml_function_coverage=1 00:05:27.650 --rc genhtml_legend=1 00:05:27.650 --rc geninfo_all_blocks=1 00:05:27.650 --rc geninfo_unexecuted_blocks=1 00:05:27.650 00:05:27.650 ' 00:05:27.650 01:46:47 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:27.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.650 --rc genhtml_branch_coverage=1 00:05:27.650 --rc genhtml_function_coverage=1 00:05:27.650 --rc genhtml_legend=1 00:05:27.650 --rc geninfo_all_blocks=1 00:05:27.650 --rc 
geninfo_unexecuted_blocks=1 00:05:27.650 00:05:27.650 ' 00:05:27.650 01:46:47 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:27.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.650 --rc genhtml_branch_coverage=1 00:05:27.650 --rc genhtml_function_coverage=1 00:05:27.650 --rc genhtml_legend=1 00:05:27.650 --rc geninfo_all_blocks=1 00:05:27.650 --rc geninfo_unexecuted_blocks=1 00:05:27.650 00:05:27.650 ' 00:05:27.650 01:46:47 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:27.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.650 --rc genhtml_branch_coverage=1 00:05:27.650 --rc genhtml_function_coverage=1 00:05:27.650 --rc genhtml_legend=1 00:05:27.650 --rc geninfo_all_blocks=1 00:05:27.650 --rc geninfo_unexecuted_blocks=1 00:05:27.650 00:05:27.650 ' 00:05:27.650 01:46:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/common.sh 00:05:27.650 01:46:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:27.650 01:46:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/clear_config.py 00:05:27.650 01:46:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:27.650 01:46:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:27.650 01:46:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:27.650 01:46:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:27.650 01:46:47 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:27.650 01:46:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.650 01:46:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3092351 00:05:27.650 01:46:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3092351 00:05:27.650 01:46:47 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 3092351 ']' 00:05:27.650 01:46:47 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.650 01:46:47 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.650 01:46:47 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.650 01:46:47 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.650 01:46:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.650 01:46:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:27.650 [2024-10-09 01:46:47.414691] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
00:05:27.650 [2024-10-09 01:46:47.414811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092351 ] 00:05:27.909 [2024-10-09 01:46:47.541388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.169 [2024-10-09 01:46:47.734242] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.169 [2024-10-09 01:46:47.734259] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.738 01:46:48 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.738 01:46:48 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:28.738 01:46:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3092451 00:05:28.738 01:46:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:28.738 01:46:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:28.999 [ 00:05:28.999 "bdev_malloc_delete", 00:05:28.999 "bdev_malloc_create", 00:05:28.999 "bdev_null_resize", 00:05:28.999 "bdev_null_delete", 00:05:28.999 "bdev_null_create", 00:05:28.999 "bdev_nvme_cuse_unregister", 00:05:28.999 "bdev_nvme_cuse_register", 00:05:28.999 "bdev_opal_new_user", 00:05:28.999 "bdev_opal_set_lock_state", 00:05:28.999 "bdev_opal_delete", 00:05:28.999 "bdev_opal_get_info", 00:05:28.999 "bdev_opal_create", 00:05:28.999 "bdev_nvme_opal_revert", 00:05:28.999 "bdev_nvme_opal_init", 00:05:28.999 "bdev_nvme_send_cmd", 00:05:28.999 "bdev_nvme_set_keys", 00:05:28.999 "bdev_nvme_get_path_iostat", 00:05:28.999 "bdev_nvme_get_mdns_discovery_info", 00:05:28.999 "bdev_nvme_stop_mdns_discovery", 00:05:28.999 "bdev_nvme_start_mdns_discovery", 00:05:28.999 "bdev_nvme_set_multipath_policy", 00:05:28.999 "bdev_nvme_set_preferred_path", 00:05:28.999 "bdev_nvme_get_io_paths", 00:05:28.999 "bdev_nvme_remove_error_injection", 00:05:28.999 "bdev_nvme_add_error_injection", 00:05:28.999 "bdev_nvme_get_discovery_info", 00:05:28.999 "bdev_nvme_stop_discovery", 00:05:28.999 "bdev_nvme_start_discovery", 00:05:28.999 "bdev_nvme_get_controller_health_info", 00:05:28.999 "bdev_nvme_disable_controller", 00:05:28.999 "bdev_nvme_enable_controller", 00:05:28.999 "bdev_nvme_reset_controller", 00:05:28.999 "bdev_nvme_get_transport_statistics", 00:05:28.999 "bdev_nvme_apply_firmware", 00:05:28.999 "bdev_nvme_detach_controller", 00:05:28.999 "bdev_nvme_get_controllers", 00:05:28.999 "bdev_nvme_attach_controller", 00:05:28.999 "bdev_nvme_set_hotplug", 00:05:28.999 "bdev_nvme_set_options", 00:05:28.999 "bdev_passthru_delete", 00:05:28.999 "bdev_passthru_create", 00:05:28.999 "bdev_lvol_set_parent_bdev", 00:05:28.999 "bdev_lvol_set_parent", 00:05:28.999 "bdev_lvol_check_shallow_copy", 00:05:28.999 "bdev_lvol_start_shallow_copy", 00:05:28.999 "bdev_lvol_grow_lvstore", 00:05:28.999 "bdev_lvol_get_lvols", 00:05:28.999 "bdev_lvol_get_lvstores", 00:05:28.999 "bdev_lvol_delete", 00:05:28.999 "bdev_lvol_set_read_only", 00:05:28.999 "bdev_lvol_resize", 00:05:28.999 "bdev_lvol_decouple_parent", 00:05:28.999 "bdev_lvol_inflate", 00:05:28.999 "bdev_lvol_rename", 00:05:28.999 "bdev_lvol_clone_bdev", 00:05:28.999 "bdev_lvol_clone", 00:05:28.999 "bdev_lvol_snapshot", 00:05:28.999 "bdev_lvol_create", 00:05:28.999 "bdev_lvol_delete_lvstore", 00:05:28.999 "bdev_lvol_rename_lvstore", 
00:05:28.999 "bdev_lvol_create_lvstore", 00:05:28.999 "bdev_raid_set_options", 00:05:28.999 "bdev_raid_remove_base_bdev", 00:05:28.999 "bdev_raid_add_base_bdev", 00:05:28.999 "bdev_raid_delete", 00:05:28.999 "bdev_raid_create", 00:05:28.999 "bdev_raid_get_bdevs", 00:05:28.999 "bdev_error_inject_error", 00:05:28.999 "bdev_error_delete", 00:05:28.999 "bdev_error_create", 00:05:28.999 "bdev_split_delete", 00:05:28.999 "bdev_split_create", 00:05:28.999 "bdev_delay_delete", 00:05:28.999 "bdev_delay_create", 00:05:28.999 "bdev_delay_update_latency", 00:05:28.999 "bdev_zone_block_delete", 00:05:28.999 "bdev_zone_block_create", 00:05:28.999 "blobfs_create", 00:05:28.999 "blobfs_detect", 00:05:28.999 "blobfs_set_cache_size", 00:05:28.999 "bdev_aio_delete", 00:05:28.999 "bdev_aio_rescan", 00:05:28.999 "bdev_aio_create", 00:05:28.999 "bdev_ftl_set_property", 00:05:28.999 "bdev_ftl_get_properties", 00:05:28.999 "bdev_ftl_get_stats", 00:05:28.999 "bdev_ftl_unmap", 00:05:28.999 "bdev_ftl_unload", 00:05:28.999 "bdev_ftl_delete", 00:05:28.999 "bdev_ftl_load", 00:05:28.999 "bdev_ftl_create", 00:05:28.999 "bdev_virtio_attach_controller", 00:05:28.999 "bdev_virtio_scsi_get_devices", 00:05:28.999 "bdev_virtio_detach_controller", 00:05:28.999 "bdev_virtio_blk_set_hotplug", 00:05:28.999 "bdev_iscsi_delete", 00:05:28.999 "bdev_iscsi_create", 00:05:28.999 "bdev_iscsi_set_options", 00:05:28.999 "accel_error_inject_error", 00:05:28.999 "ioat_scan_accel_module", 00:05:28.999 "dsa_scan_accel_module", 00:05:28.999 "iaa_scan_accel_module", 00:05:28.999 "keyring_file_remove_key", 00:05:28.999 "keyring_file_add_key", 00:05:28.999 "keyring_linux_set_options", 00:05:28.999 "fsdev_aio_delete", 00:05:28.999 "fsdev_aio_create", 00:05:28.999 "iscsi_get_histogram", 00:05:28.999 "iscsi_enable_histogram", 00:05:28.999 "iscsi_set_options", 00:05:28.999 "iscsi_get_auth_groups", 00:05:28.999 "iscsi_auth_group_remove_secret", 00:05:28.999 "iscsi_auth_group_add_secret", 00:05:28.999 "iscsi_delete_auth_group", 00:05:28.999 "iscsi_create_auth_group", 00:05:28.999 "iscsi_set_discovery_auth", 00:05:28.999 "iscsi_get_options", 00:05:28.999 "iscsi_target_node_request_logout", 00:05:28.999 "iscsi_target_node_set_redirect", 00:05:28.999 "iscsi_target_node_set_auth", 00:05:28.999 "iscsi_target_node_add_lun", 00:05:28.999 "iscsi_get_stats", 00:05:28.999 "iscsi_get_connections", 00:05:28.999 "iscsi_portal_group_set_auth", 00:05:28.999 "iscsi_start_portal_group", 00:05:28.999 "iscsi_delete_portal_group", 00:05:28.999 "iscsi_create_portal_group", 00:05:28.999 "iscsi_get_portal_groups", 00:05:28.999 "iscsi_delete_target_node", 00:05:28.999 "iscsi_target_node_remove_pg_ig_maps", 00:05:28.999 "iscsi_target_node_add_pg_ig_maps", 00:05:28.999 "iscsi_create_target_node", 00:05:28.999 "iscsi_get_target_nodes", 00:05:28.999 "iscsi_delete_initiator_group", 00:05:28.999 "iscsi_initiator_group_remove_initiators", 00:05:28.999 "iscsi_initiator_group_add_initiators", 00:05:28.999 "iscsi_create_initiator_group", 00:05:28.999 "iscsi_get_initiator_groups", 00:05:28.999 "nvmf_set_crdt", 00:05:28.999 "nvmf_set_config", 00:05:28.999 "nvmf_set_max_subsystems", 00:05:28.999 "nvmf_stop_mdns_prr", 00:05:28.999 "nvmf_publish_mdns_prr", 00:05:28.999 "nvmf_subsystem_get_listeners", 00:05:28.999 "nvmf_subsystem_get_qpairs", 00:05:28.999 "nvmf_subsystem_get_controllers", 00:05:28.999 "nvmf_get_stats", 00:05:28.999 "nvmf_get_transports", 00:05:28.999 "nvmf_create_transport", 00:05:28.999 "nvmf_get_targets", 00:05:28.999 "nvmf_delete_target", 00:05:28.999 "nvmf_create_target", 
00:05:28.999 "nvmf_subsystem_allow_any_host", 00:05:28.999 "nvmf_subsystem_set_keys", 00:05:28.999 "nvmf_subsystem_remove_host", 00:05:28.999 "nvmf_subsystem_add_host", 00:05:28.999 "nvmf_ns_remove_host", 00:05:28.999 "nvmf_ns_add_host", 00:05:28.999 "nvmf_subsystem_remove_ns", 00:05:28.999 "nvmf_subsystem_set_ns_ana_group", 00:05:28.999 "nvmf_subsystem_add_ns", 00:05:28.999 "nvmf_subsystem_listener_set_ana_state", 00:05:28.999 "nvmf_discovery_get_referrals", 00:05:28.999 "nvmf_discovery_remove_referral", 00:05:28.999 "nvmf_discovery_add_referral", 00:05:28.999 "nvmf_subsystem_remove_listener", 00:05:28.999 "nvmf_subsystem_add_listener", 00:05:28.999 "nvmf_delete_subsystem", 00:05:28.999 "nvmf_create_subsystem", 00:05:28.999 "nvmf_get_subsystems", 00:05:28.999 "env_dpdk_get_mem_stats", 00:05:28.999 "nbd_get_disks", 00:05:28.999 "nbd_stop_disk", 00:05:28.999 "nbd_start_disk", 00:05:28.999 "ublk_recover_disk", 00:05:28.999 "ublk_get_disks", 00:05:28.999 "ublk_stop_disk", 00:05:28.999 "ublk_start_disk", 00:05:28.999 "ublk_destroy_target", 00:05:28.999 "ublk_create_target", 00:05:28.999 "virtio_blk_create_transport", 00:05:28.999 "virtio_blk_get_transports", 00:05:28.999 "vhost_controller_set_coalescing", 00:05:28.999 "vhost_get_controllers", 00:05:28.999 "vhost_delete_controller", 00:05:28.999 "vhost_create_blk_controller", 00:05:28.999 "vhost_scsi_controller_remove_target", 00:05:28.999 "vhost_scsi_controller_add_target", 00:05:28.999 "vhost_start_scsi_controller", 00:05:28.999 "vhost_create_scsi_controller", 00:05:28.999 "thread_set_cpumask", 00:05:28.999 "scheduler_set_options", 00:05:28.999 "framework_get_governor", 00:05:28.999 "framework_get_scheduler", 00:05:28.999 "framework_set_scheduler", 00:05:28.999 "framework_get_reactors", 00:05:28.999 "thread_get_io_channels", 00:05:28.999 "thread_get_pollers", 00:05:28.999 "thread_get_stats", 00:05:28.999 "framework_monitor_context_switch", 00:05:28.999 "spdk_kill_instance", 00:05:28.999 "log_enable_timestamps", 00:05:28.999 "log_get_flags", 00:05:28.999 "log_clear_flag", 00:05:28.999 "log_set_flag", 00:05:28.999 "log_get_level", 00:05:28.999 "log_set_level", 00:05:28.999 "log_get_print_level", 00:05:28.999 "log_set_print_level", 00:05:28.999 "framework_enable_cpumask_locks", 00:05:28.999 "framework_disable_cpumask_locks", 00:05:28.999 "framework_wait_init", 00:05:28.999 "framework_start_init", 00:05:28.999 "scsi_get_devices", 00:05:28.999 "bdev_get_histogram", 00:05:28.999 "bdev_enable_histogram", 00:05:28.999 "bdev_set_qos_limit", 00:05:28.999 "bdev_set_qd_sampling_period", 00:05:28.999 "bdev_get_bdevs", 00:05:28.999 "bdev_reset_iostat", 00:05:28.999 "bdev_get_iostat", 00:05:29.000 "bdev_examine", 00:05:29.000 "bdev_wait_for_examine", 00:05:29.000 "bdev_set_options", 00:05:29.000 "accel_get_stats", 00:05:29.000 "accel_set_options", 00:05:29.000 "accel_set_driver", 00:05:29.000 "accel_crypto_key_destroy", 00:05:29.000 "accel_crypto_keys_get", 00:05:29.000 "accel_crypto_key_create", 00:05:29.000 "accel_assign_opc", 00:05:29.000 "accel_get_module_info", 00:05:29.000 "accel_get_opc_assignments", 00:05:29.000 "vmd_rescan", 00:05:29.000 "vmd_remove_device", 00:05:29.000 "vmd_enable", 00:05:29.000 "sock_get_default_impl", 00:05:29.000 "sock_set_default_impl", 00:05:29.000 "sock_impl_set_options", 00:05:29.000 "sock_impl_get_options", 00:05:29.000 "iobuf_get_stats", 00:05:29.000 "iobuf_set_options", 00:05:29.000 "keyring_get_keys", 00:05:29.000 "framework_get_pci_devices", 00:05:29.000 "framework_get_config", 00:05:29.000 "framework_get_subsystems", 
00:05:29.000 "fsdev_set_opts", 00:05:29.000 "fsdev_get_opts", 00:05:29.000 "trace_get_info", 00:05:29.000 "trace_get_tpoint_group_mask", 00:05:29.000 "trace_disable_tpoint_group", 00:05:29.000 "trace_enable_tpoint_group", 00:05:29.000 "trace_clear_tpoint_mask", 00:05:29.000 "trace_set_tpoint_mask", 00:05:29.000 "notify_get_notifications", 00:05:29.000 "notify_get_types", 00:05:29.000 "spdk_get_version", 00:05:29.000 "rpc_get_methods" 00:05:29.000 ] 00:05:29.000 01:46:48 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:29.000 01:46:48 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:29.000 01:46:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.000 01:46:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:29.000 01:46:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3092351 00:05:29.000 01:46:48 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 3092351 ']' 00:05:29.000 01:46:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 3092351 00:05:29.000 01:46:48 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:29.000 01:46:48 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:29.000 01:46:48 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3092351 00:05:29.000 01:46:48 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:29.000 01:46:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:29.000 01:46:48 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3092351' 00:05:29.000 killing process with pid 3092351 00:05:29.000 01:46:48 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 3092351 00:05:29.000 01:46:48 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 3092351 00:05:31.538 00:05:31.538 real 0m4.143s 00:05:31.538 user 0m7.224s 00:05:31.538 sys 0m0.734s 00:05:31.538 01:46:51 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.538 01:46:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.538 ************************************ 00:05:31.538 END TEST spdkcli_tcp 00:05:31.538 ************************************ 00:05:31.538 01:46:51 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:31.538 01:46:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.538 01:46:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.538 01:46:51 -- common/autotest_common.sh@10 -- # set +x 00:05:31.798 ************************************ 00:05:31.798 START TEST dpdk_mem_utility 00:05:31.798 ************************************ 00:05:31.798 01:46:51 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:31.798 * Looking for test storage... 
00:05:31.798 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dpdk_memory_utility 00:05:31.798 01:46:51 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:31.798 01:46:51 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:31.798 01:46:51 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:31.798 01:46:51 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.798 01:46:51 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:31.798 01:46:51 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.798 01:46:51 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:31.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.798 --rc genhtml_branch_coverage=1 00:05:31.798 --rc genhtml_function_coverage=1 00:05:31.798 --rc genhtml_legend=1 00:05:31.798 --rc geninfo_all_blocks=1 00:05:31.798 --rc geninfo_unexecuted_blocks=1 00:05:31.798 00:05:31.798 ' 00:05:31.798 01:46:51 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:31.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.798 --rc 
genhtml_branch_coverage=1 00:05:31.798 --rc genhtml_function_coverage=1 00:05:31.798 --rc genhtml_legend=1 00:05:31.798 --rc geninfo_all_blocks=1 00:05:31.798 --rc geninfo_unexecuted_blocks=1 00:05:31.798 00:05:31.798 ' 00:05:31.798 01:46:51 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:31.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.798 --rc genhtml_branch_coverage=1 00:05:31.798 --rc genhtml_function_coverage=1 00:05:31.798 --rc genhtml_legend=1 00:05:31.798 --rc geninfo_all_blocks=1 00:05:31.798 --rc geninfo_unexecuted_blocks=1 00:05:31.798 00:05:31.798 ' 00:05:31.798 01:46:51 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:31.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.798 --rc genhtml_branch_coverage=1 00:05:31.798 --rc genhtml_function_coverage=1 00:05:31.798 --rc genhtml_legend=1 00:05:31.798 --rc geninfo_all_blocks=1 00:05:31.798 --rc geninfo_unexecuted_blocks=1 00:05:31.798 00:05:31.798 ' 00:05:31.798 01:46:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:31.798 01:46:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3092996 00:05:31.798 01:46:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.798 01:46:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3092996 00:05:31.798 01:46:51 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 3092996 ']' 00:05:31.798 01:46:51 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.798 01:46:51 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.798 01:46:51 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.798 01:46:51 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.798 01:46:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:31.798 [2024-10-09 01:46:51.612449] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
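The heap and memzone dump that follows is produced in two steps, both visible in the trace below: the target writes its memory stats through an RPC, then dpdk_mem_info.py post-processes the dump file. As a sketch (the dump path is the one reported in the RPC result below; rpc_cmd is the harness wrapper around scripts/rpc.py):

  # Ask the running spdk_tgt to dump DPDK memory stats (test_dpdk_mem_info.sh@19).
  ./scripts/rpc.py env_dpdk_get_mem_stats           # writes /tmp/spdk_mem_dump.txt
  # Summarize heaps, mempools, and memzones (test_dpdk_mem_info.sh@21).
  ./scripts/dpdk_mem_info.py
  # Per-element detail for heap 0 (test_dpdk_mem_info.sh@23).
  ./scripts/dpdk_mem_info.py -m 0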
00:05:31.798 [2024-10-09 01:46:51.612560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092996 ] 00:05:32.058 [2024-10-09 01:46:51.738206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.318 [2024-10-09 01:46:51.924143] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.258 01:46:52 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.258 01:46:52 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:33.258 01:46:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:33.258 01:46:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:33.258 01:46:52 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.258 01:46:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:33.258 { 00:05:33.258 "filename": "/tmp/spdk_mem_dump.txt" 00:05:33.258 } 00:05:33.258 01:46:52 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.258 01:46:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:33.258 DPDK memory size 866.000000 MiB in 1 heap(s) 00:05:33.258 1 heaps totaling size 866.000000 MiB 00:05:33.258 size: 866.000000 MiB heap id: 0 00:05:33.258 end heaps---------- 00:05:33.258 9 mempools totaling size 642.649841 MiB 00:05:33.258 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:33.258 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:33.258 size: 92.545471 MiB name: bdev_io_3092996 00:05:33.258 size: 51.011292 MiB name: evtpool_3092996 00:05:33.258 size: 50.003479 MiB name: msgpool_3092996 00:05:33.258 size: 36.509338 MiB name: fsdev_io_3092996 00:05:33.258 size: 21.763794 MiB name: PDU_Pool 00:05:33.258 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:33.258 size: 0.026123 MiB name: Session_Pool 00:05:33.258 end mempools------- 00:05:33.258 6 memzones totaling size 4.142822 MiB 00:05:33.258 size: 1.000366 MiB name: RG_ring_0_3092996 00:05:33.258 size: 1.000366 MiB name: RG_ring_1_3092996 00:05:33.258 size: 1.000366 MiB name: RG_ring_4_3092996 00:05:33.258 size: 1.000366 MiB name: RG_ring_5_3092996 00:05:33.258 size: 0.125366 MiB name: RG_ring_2_3092996 00:05:33.258 size: 0.015991 MiB name: RG_ring_3_3092996 00:05:33.258 end memzones------- 00:05:33.258 01:46:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:33.258 heap id: 0 total size: 866.000000 MiB number of busy elements: 44 number of free elements: 20 00:05:33.258 list of free elements. 
size: 19.979797 MiB 00:05:33.258 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:33.258 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:33.258 element at address: 0x200009600000 with size: 1.995972 MiB 00:05:33.258 element at address: 0x20000d800000 with size: 1.995972 MiB 00:05:33.258 element at address: 0x200007000000 with size: 1.991028 MiB 00:05:33.258 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:05:33.258 element at address: 0x20001c300040 with size: 0.999939 MiB 00:05:33.258 element at address: 0x20001c400000 with size: 0.999329 MiB 00:05:33.258 element at address: 0x200035000000 with size: 0.994324 MiB 00:05:33.258 element at address: 0x20001bc00000 with size: 0.959900 MiB 00:05:33.258 element at address: 0x20001c700040 with size: 0.937256 MiB 00:05:33.258 element at address: 0x200000200000 with size: 0.840942 MiB 00:05:33.258 element at address: 0x20001de00000 with size: 0.583191 MiB 00:05:33.258 element at address: 0x200003e00000 with size: 0.495300 MiB 00:05:33.258 element at address: 0x20001c000000 with size: 0.491150 MiB 00:05:33.258 element at address: 0x20001c800000 with size: 0.485657 MiB 00:05:33.258 element at address: 0x200015e00000 with size: 0.446167 MiB 00:05:33.258 element at address: 0x20002b200000 with size: 0.411072 MiB 00:05:33.258 element at address: 0x200003a00000 with size: 0.355286 MiB 00:05:33.258 element at address: 0x20000d7ff040 with size: 0.001038 MiB 00:05:33.258 list of standard malloc elements. size: 199.221497 MiB 00:05:33.258 element at address: 0x20000d9fef80 with size: 132.000183 MiB 00:05:33.258 element at address: 0x2000097fef80 with size: 64.000183 MiB 00:05:33.258 element at address: 0x20001bdfff80 with size: 1.000183 MiB 00:05:33.258 element at address: 0x20001c1fff80 with size: 1.000183 MiB 00:05:33.258 element at address: 0x20001c5fff80 with size: 1.000183 MiB 00:05:33.258 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:33.258 element at address: 0x20001c7eff40 with size: 0.062683 MiB 00:05:33.258 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:33.258 element at address: 0x200015dff040 with size: 0.000427 MiB 00:05:33.258 element at address: 0x200015dffa00 with size: 0.000366 MiB 00:05:33.258 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:05:33.258 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:05:33.258 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:05:33.258 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:33.258 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:33.258 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:33.258 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:33.258 element at address: 0x200003a7f3c0 with size: 0.000244 MiB 00:05:33.258 element at address: 0x200003a7f4c0 with size: 0.000244 MiB 00:05:33.258 element at address: 0x200003aff800 with size: 0.000244 MiB 00:05:33.258 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:33.258 element at address: 0x200003efef00 with size: 0.000244 MiB 00:05:33.258 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:33.258 element at address: 0x20000d7ff480 with size: 0.000244 MiB 00:05:33.258 element at address: 0x20000d7ff580 with size: 0.000244 MiB 00:05:33.258 element at address: 0x20000d7ff680 with size: 0.000244 MiB 00:05:33.258 element at address: 0x20000d7ff780 with size: 0.000244 MiB 00:05:33.258 element at address: 0x20000d7ff880 with size: 0.000244 MiB 
00:05:33.258 element at address: 0x20000d7ff980 with size: 0.000244 MiB 00:05:33.258 element at address: 0x20000d7ffc00 with size: 0.000244 MiB 00:05:33.258 element at address: 0x20000d7ffd00 with size: 0.000244 MiB 00:05:33.258 element at address: 0x20000d7ffe00 with size: 0.000244 MiB 00:05:33.258 element at address: 0x20000d7fff00 with size: 0.000244 MiB 00:05:33.258 element at address: 0x200015dff200 with size: 0.000244 MiB 00:05:33.258 element at address: 0x200015dff300 with size: 0.000244 MiB 00:05:33.258 element at address: 0x200015dff400 with size: 0.000244 MiB 00:05:33.258 element at address: 0x200015dff500 with size: 0.000244 MiB 00:05:33.258 element at address: 0x200015dff600 with size: 0.000244 MiB 00:05:33.258 element at address: 0x200015dff700 with size: 0.000244 MiB 00:05:33.258 element at address: 0x200015dff800 with size: 0.000244 MiB 00:05:33.258 element at address: 0x200015dff900 with size: 0.000244 MiB 00:05:33.258 element at address: 0x200015dffb80 with size: 0.000244 MiB 00:05:33.258 element at address: 0x200015dffc80 with size: 0.000244 MiB 00:05:33.258 element at address: 0x200015dfff00 with size: 0.000244 MiB 00:05:33.258 list of memzone associated elements. size: 646.798706 MiB 00:05:33.258 element at address: 0x20001de954c0 with size: 211.416809 MiB 00:05:33.258 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:33.258 element at address: 0x20002b26ff80 with size: 157.562622 MiB 00:05:33.258 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:33.258 element at address: 0x200015ff4740 with size: 92.045105 MiB 00:05:33.258 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3092996_0 00:05:33.258 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:33.258 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3092996_0 00:05:33.258 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:33.258 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3092996_0 00:05:33.258 element at address: 0x2000071fdb40 with size: 36.008972 MiB 00:05:33.258 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3092996_0 00:05:33.258 element at address: 0x20001c9be900 with size: 20.255615 MiB 00:05:33.258 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:33.258 element at address: 0x2000351feb00 with size: 18.005127 MiB 00:05:33.258 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:33.258 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:33.258 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3092996 00:05:33.258 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:33.258 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3092996 00:05:33.258 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:33.258 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3092996 00:05:33.258 element at address: 0x20001c0fde00 with size: 1.008179 MiB 00:05:33.258 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:33.258 element at address: 0x20001c8bc780 with size: 1.008179 MiB 00:05:33.258 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:33.258 element at address: 0x20001bcfde00 with size: 1.008179 MiB 00:05:33.258 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:33.258 element at address: 0x200015ef25c0 with size: 1.008179 MiB 00:05:33.258 associated memzone info: size: 1.007996 
MiB name: MP_SCSI_TASK_Pool 00:05:33.258 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:33.258 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3092996 00:05:33.258 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:33.258 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3092996 00:05:33.258 element at address: 0x20001c4ffd40 with size: 1.000549 MiB 00:05:33.258 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3092996 00:05:33.258 element at address: 0x2000350fe8c0 with size: 1.000549 MiB 00:05:33.259 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3092996 00:05:33.259 element at address: 0x200003a7f5c0 with size: 0.500549 MiB 00:05:33.259 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3092996 00:05:33.259 element at address: 0x200003e7ecc0 with size: 0.500549 MiB 00:05:33.259 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3092996 00:05:33.259 element at address: 0x20001c07dbc0 with size: 0.500549 MiB 00:05:33.259 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:33.259 element at address: 0x200015e72380 with size: 0.500549 MiB 00:05:33.259 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:33.259 element at address: 0x20001c87c540 with size: 0.250549 MiB 00:05:33.259 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:33.259 element at address: 0x200003a5f180 with size: 0.125549 MiB 00:05:33.259 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3092996 00:05:33.259 element at address: 0x20001bcf5bc0 with size: 0.031799 MiB 00:05:33.259 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:33.259 element at address: 0x20002b2693c0 with size: 0.023804 MiB 00:05:33.259 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:33.259 element at address: 0x200003a5af40 with size: 0.016174 MiB 00:05:33.259 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3092996 00:05:33.259 element at address: 0x20002b26f540 with size: 0.002502 MiB 00:05:33.259 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:33.259 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:05:33.259 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3092996 00:05:33.259 element at address: 0x200003aff900 with size: 0.000366 MiB 00:05:33.259 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3092996 00:05:33.259 element at address: 0x200015dffd80 with size: 0.000366 MiB 00:05:33.259 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3092996 00:05:33.259 element at address: 0x20000d7ffa80 with size: 0.000366 MiB 00:05:33.259 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:33.259 01:46:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:33.259 01:46:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3092996 00:05:33.259 01:46:52 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 3092996 ']' 00:05:33.259 01:46:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 3092996 00:05:33.259 01:46:52 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:33.259 01:46:52 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.259 01:46:52 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3092996 00:05:33.259 01:46:52 
dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.259 01:46:52 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.259 01:46:52 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3092996' 00:05:33.259 killing process with pid 3092996 00:05:33.259 01:46:52 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 3092996 00:05:33.259 01:46:52 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 3092996 00:05:35.797 00:05:35.797 real 0m3.886s 00:05:35.797 user 0m3.741s 00:05:35.797 sys 0m0.653s 00:05:35.797 01:46:55 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.797 01:46:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:35.797 ************************************ 00:05:35.797 END TEST dpdk_mem_utility 00:05:35.797 ************************************ 00:05:35.797 01:46:55 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/event.sh 00:05:35.797 01:46:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.797 01:46:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.797 01:46:55 -- common/autotest_common.sh@10 -- # set +x 00:05:35.797 ************************************ 00:05:35.797 START TEST event 00:05:35.797 ************************************ 00:05:35.797 01:46:55 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/event.sh 00:05:35.797 * Looking for test storage... 00:05:35.797 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event 00:05:35.797 01:46:55 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:35.797 01:46:55 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:35.797 01:46:55 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:35.797 01:46:55 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:35.797 01:46:55 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.797 01:46:55 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.797 01:46:55 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.797 01:46:55 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.797 01:46:55 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.797 01:46:55 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.797 01:46:55 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.797 01:46:55 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.797 01:46:55 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.797 01:46:55 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.797 01:46:55 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.797 01:46:55 event -- scripts/common.sh@344 -- # case "$op" in 00:05:35.797 01:46:55 event -- scripts/common.sh@345 -- # : 1 00:05:35.797 01:46:55 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.797 01:46:55 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.797 01:46:55 event -- scripts/common.sh@365 -- # decimal 1 00:05:35.797 01:46:55 event -- scripts/common.sh@353 -- # local d=1 00:05:35.797 01:46:55 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.797 01:46:55 event -- scripts/common.sh@355 -- # echo 1 00:05:35.797 01:46:55 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.797 01:46:55 event -- scripts/common.sh@366 -- # decimal 2 00:05:35.797 01:46:55 event -- scripts/common.sh@353 -- # local d=2 00:05:35.797 01:46:55 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.797 01:46:55 event -- scripts/common.sh@355 -- # echo 2 00:05:35.797 01:46:55 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.797 01:46:55 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.797 01:46:55 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.797 01:46:55 event -- scripts/common.sh@368 -- # return 0 00:05:35.797 01:46:55 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.797 01:46:55 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:35.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.797 --rc genhtml_branch_coverage=1 00:05:35.797 --rc genhtml_function_coverage=1 00:05:35.797 --rc genhtml_legend=1 00:05:35.797 --rc geninfo_all_blocks=1 00:05:35.797 --rc geninfo_unexecuted_blocks=1 00:05:35.797 00:05:35.797 ' 00:05:35.797 01:46:55 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:35.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.797 --rc genhtml_branch_coverage=1 00:05:35.797 --rc genhtml_function_coverage=1 00:05:35.797 --rc genhtml_legend=1 00:05:35.797 --rc geninfo_all_blocks=1 00:05:35.797 --rc geninfo_unexecuted_blocks=1 00:05:35.797 00:05:35.797 ' 00:05:35.797 01:46:55 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:35.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.797 --rc genhtml_branch_coverage=1 00:05:35.797 --rc genhtml_function_coverage=1 00:05:35.797 --rc genhtml_legend=1 00:05:35.797 --rc geninfo_all_blocks=1 00:05:35.797 --rc geninfo_unexecuted_blocks=1 00:05:35.797 00:05:35.797 ' 00:05:35.797 01:46:55 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:35.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.797 --rc genhtml_branch_coverage=1 00:05:35.797 --rc genhtml_function_coverage=1 00:05:35.797 --rc genhtml_legend=1 00:05:35.797 --rc geninfo_all_blocks=1 00:05:35.797 --rc geninfo_unexecuted_blocks=1 00:05:35.797 00:05:35.797 ' 00:05:35.797 01:46:55 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:35.797 01:46:55 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:35.797 01:46:55 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:35.797 01:46:55 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:35.797 01:46:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.797 01:46:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.797 ************************************ 00:05:35.797 START TEST event_perf 00:05:35.797 ************************************ 00:05:35.797 01:46:55 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:35.797 Running I/O for 1 seconds...[2024-10-09 01:46:55.567896] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:05:35.797 [2024-10-09 01:46:55.567999] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093593 ] 00:05:36.056 [2024-10-09 01:46:55.693074] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.315 [2024-10-09 01:46:55.888711] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.315 [2024-10-09 01:46:55.888761] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.315 [2024-10-09 01:46:55.888814] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.315 [2024-10-09 01:46:55.888826] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:37.694 Running I/O for 1 seconds... 00:05:37.694 lcore 0: 200771 00:05:37.694 lcore 1: 200770 00:05:37.694 lcore 2: 200771 00:05:37.694 lcore 3: 200771 00:05:37.694 done. 00:05:37.694 00:05:37.694 real 0m1.740s 00:05:37.694 user 0m4.555s 00:05:37.694 sys 0m0.179s 00:05:37.694 01:46:57 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.694 01:46:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.694 ************************************ 00:05:37.694 END TEST event_perf 00:05:37.694 ************************************ 00:05:37.694 01:46:57 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:37.694 01:46:57 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:37.694 01:46:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.694 01:46:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.694 ************************************ 00:05:37.694 START TEST event_reactor 00:05:37.694 ************************************ 00:05:37.694 01:46:57 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:37.694 [2024-10-09 01:46:57.386313] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
00:05:37.694 [2024-10-09 01:46:57.386394] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093801 ] 00:05:37.694 [2024-10-09 01:46:57.510878] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.954 [2024-10-09 01:46:57.698461] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.334 test_start 00:05:39.334 oneshot 00:05:39.334 tick 100 00:05:39.334 tick 100 00:05:39.334 tick 250 00:05:39.334 tick 100 00:05:39.334 tick 100 00:05:39.334 tick 100 00:05:39.334 tick 250 00:05:39.334 tick 500 00:05:39.334 tick 100 00:05:39.334 tick 100 00:05:39.334 tick 250 00:05:39.334 tick 100 00:05:39.334 tick 100 00:05:39.334 test_end 00:05:39.334 00:05:39.334 real 0m1.719s 00:05:39.334 user 0m1.558s 00:05:39.334 sys 0m0.153s 00:05:39.334 01:46:59 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.334 01:46:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:39.334 ************************************ 00:05:39.334 END TEST event_reactor 00:05:39.334 ************************************ 00:05:39.334 01:46:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:39.334 01:46:59 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:39.334 01:46:59 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.334 01:46:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.334 ************************************ 00:05:39.334 START TEST event_reactor_perf 00:05:39.334 ************************************ 00:05:39.334 01:46:59 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:39.595 [2024-10-09 01:46:59.182549] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
00:05:39.595 [2024-10-09 01:46:59.182649] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3094009 ] 00:05:39.595 [2024-10-09 01:46:59.309444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.854 [2024-10-09 01:46:59.505418] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.235 test_start 00:05:41.235 test_end 00:05:41.235 Performance: 388632 events per second 00:05:41.235 00:05:41.235 real 0m1.739s 00:05:41.235 user 0m1.567s 00:05:41.235 sys 0m0.163s 00:05:41.235 01:47:00 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.235 01:47:00 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.235 ************************************ 00:05:41.235 END TEST event_reactor_perf 00:05:41.235 ************************************ 00:05:41.235 01:47:00 event -- event/event.sh@49 -- # uname -s 00:05:41.235 01:47:00 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:41.235 01:47:00 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:41.235 01:47:00 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.235 01:47:00 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.235 01:47:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.235 ************************************ 00:05:41.235 START TEST event_scheduler 00:05:41.235 ************************************ 00:05:41.235 01:47:00 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:41.235 * Looking for test storage... 
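Before the scheduler test continues below, note that the three event binaries just exercised all follow the same invocation pattern: a run time in seconds via -t and, for event_perf, a core mask via -m. Recapping the exact commands from their traces (built under test/event/ in the workspace):

  ./test/event/event_perf/event_perf -m 0xF -t 1    # 4 reactors, per-lcore event counts
  ./test/event/reactor/reactor -t 1                 # single reactor, oneshot/tick trace
  ./test/event/reactor_perf/reactor_perf -t 1       # reports events per second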
00:05:41.496 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/scheduler 00:05:41.496 01:47:01 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:41.496 01:47:01 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:41.496 01:47:01 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:41.496 01:47:01 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.496 01:47:01 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:41.496 01:47:01 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.496 01:47:01 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:41.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.496 --rc genhtml_branch_coverage=1 00:05:41.496 --rc genhtml_function_coverage=1 00:05:41.496 --rc genhtml_legend=1 00:05:41.496 --rc geninfo_all_blocks=1 00:05:41.496 --rc geninfo_unexecuted_blocks=1 00:05:41.496 00:05:41.496 ' 00:05:41.496 01:47:01 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:41.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.496 --rc genhtml_branch_coverage=1 00:05:41.496 --rc genhtml_function_coverage=1 00:05:41.496 --rc genhtml_legend=1 00:05:41.496 --rc geninfo_all_blocks=1 00:05:41.496 --rc geninfo_unexecuted_blocks=1 00:05:41.496 00:05:41.496 ' 00:05:41.496 01:47:01 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:41.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.496 --rc genhtml_branch_coverage=1 00:05:41.496 --rc genhtml_function_coverage=1 00:05:41.496 --rc genhtml_legend=1 00:05:41.496 --rc geninfo_all_blocks=1 00:05:41.496 --rc geninfo_unexecuted_blocks=1 00:05:41.496 00:05:41.496 ' 00:05:41.496 01:47:01 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:41.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.496 --rc genhtml_branch_coverage=1 00:05:41.496 --rc genhtml_function_coverage=1 00:05:41.496 --rc genhtml_legend=1 00:05:41.496 --rc geninfo_all_blocks=1 00:05:41.496 --rc geninfo_unexecuted_blocks=1 00:05:41.496 00:05:41.496 ' 00:05:41.496 01:47:01 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:41.496 01:47:01 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3094405 00:05:41.496 01:47:01 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:41.496 01:47:01 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.496 01:47:01 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
3094405 00:05:41.496 01:47:01 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 3094405 ']' 00:05:41.496 01:47:01 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.496 01:47:01 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.496 01:47:01 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.496 01:47:01 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.496 01:47:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.496 [2024-10-09 01:47:01.230660] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:05:41.496 [2024-10-09 01:47:01.230766] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3094405 ] 00:05:41.756 [2024-10-09 01:47:01.355284] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:41.756 [2024-10-09 01:47:01.554290] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.756 [2024-10-09 01:47:01.554351] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.756 [2024-10-09 01:47:01.554400] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.756 [2024-10-09 01:47:01.554413] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.326 01:47:02 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.326 01:47:02 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:42.326 01:47:02 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:42.326 01:47:02 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.326 01:47:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.326 [2024-10-09 01:47:02.064726] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:42.326 [2024-10-09 01:47:02.064768] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:42.326 [2024-10-09 01:47:02.064788] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:42.326 [2024-10-09 01:47:02.064799] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:42.326 [2024-10-09 01:47:02.064812] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:42.326 01:47:02 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.326 01:47:02 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:42.326 01:47:02 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.326 01:47:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.585 [2024-10-09 01:47:02.344343] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
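The scheduler app above was launched with --wait-for-rpc, so it idles in the startup state until RPCs arrive. The trace then issues framework_set_scheduler dynamic — the DPDK-governor ERROR is non-fatal here; the dynamic scheduler falls back and applies its load-limit/core-limit/core-busy defaults as the NOTICE lines show — followed by framework_start_init to complete startup. The same two calls can be made by hand with scripts/rpc.py against the default /var/tmp/spdk.sock the test waits on (paths abbreviated):

  ./scripts/rpc.py framework_set_scheduler dynamic   # switch scheduler while the app waits in startup
  ./scripts/rpc.py framework_start_init              # finish init; reactors begin scheduling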
00:05:42.585 01:47:02 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.585 01:47:02 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:42.585 01:47:02 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.585 01:47:02 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.585 01:47:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.585 ************************************ 00:05:42.585 START TEST scheduler_create_thread 00:05:42.585 ************************************ 00:05:42.585 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:42.585 01:47:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:42.585 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.585 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.585 2 00:05:42.585 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.585 01:47:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:42.585 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.585 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.845 3 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.845 4 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.845 5 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.845 6 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.845 7 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.845 8 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.845 9 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.845 10 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.845 01:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.225 01:47:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.225 01:47:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:44.225 01:47:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:44.225 01:47:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.225 01:47:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.220 01:47:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.220 01:47:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:45.220 01:47:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.220 01:47:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.835 01:47:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.835 01:47:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:45.835 01:47:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:45.835 01:47:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.835 01:47:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.772 01:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.772 00:05:46.772 real 0m3.897s 00:05:46.772 user 0m0.024s 00:05:46.772 sys 0m0.009s 00:05:46.772 01:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.772 01:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.772 ************************************ 00:05:46.772 END TEST scheduler_create_thread 00:05:46.772 ************************************ 00:05:46.772 01:47:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:46.772 01:47:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3094405 00:05:46.772 01:47:06 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 3094405 ']' 00:05:46.772 01:47:06 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 3094405 00:05:46.772 01:47:06 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:46.772 01:47:06 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.772 01:47:06 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3094405 00:05:46.772 01:47:06 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:46.772 01:47:06 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:46.772 01:47:06 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3094405' 00:05:46.772 killing process with pid 3094405 00:05:46.772 01:47:06 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 3094405 00:05:46.772 01:47:06 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 3094405 00:05:47.031 [2024-10-09 01:47:06.662866] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
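scheduler_create_thread built its whole workload through the scheduler_plugin RPCs traced above: one CPU-pinned active thread (-a 100) and one pinned idle thread (-a 0) per core mask 0x1/0x2/0x4/0x8, two unpinned threads, one thread flipped to 50% active by id, and one created and immediately deleted. Condensed, the sequence amounts to the following (rpc_cmd is the test's RPC wrapper; thread ids 11 and 12 match the trace):

  for mask in 0x1 0x2 0x4 0x8; do
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m $mask -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m $mask -a 0
  done
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)   # -> 11
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active $thread_id 50
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)     # -> 12
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete $thread_id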
00:05:48.410 00:05:48.410 real 0m6.966s 00:05:48.410 user 0m14.277s 00:05:48.410 sys 0m0.588s 00:05:48.410 01:47:07 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.410 01:47:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.411 ************************************ 00:05:48.411 END TEST event_scheduler 00:05:48.411 ************************************ 00:05:48.411 01:47:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:48.411 01:47:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:48.411 01:47:07 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.411 01:47:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.411 01:47:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.411 ************************************ 00:05:48.411 START TEST app_repeat 00:05:48.411 ************************************ 00:05:48.411 01:47:08 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:48.411 01:47:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.411 01:47:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.411 01:47:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:48.411 01:47:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.411 01:47:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:48.411 01:47:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:48.411 01:47:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:48.411 01:47:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3095342 00:05:48.411 01:47:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.411 01:47:08 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:48.411 01:47:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3095342' 00:05:48.411 Process app_repeat pid: 3095342 00:05:48.411 01:47:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:48.411 01:47:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:48.411 spdk_app_start Round 0 00:05:48.411 01:47:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3095342 /var/tmp/spdk-nbd.sock 00:05:48.411 01:47:08 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3095342 ']' 00:05:48.411 01:47:08 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.411 01:47:08 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.411 01:47:08 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.411 01:47:08 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.411 01:47:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.411 [2024-10-09 01:47:08.083845] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
00:05:48.411 [2024-10-09 01:47:08.083940] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3095342 ] 00:05:48.411 [2024-10-09 01:47:08.207438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.669 [2024-10-09 01:47:08.397699] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.669 [2024-10-09 01:47:08.397711] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.238 01:47:08 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.238 01:47:08 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:49.238 01:47:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.497 Malloc0 00:05:49.497 01:47:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.756 Malloc1 00:05:49.756 01:47:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.756 01:47:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.756 01:47:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.756 01:47:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.756 01:47:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.756 01:47:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.756 01:47:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.756 01:47:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.756 01:47:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.756 01:47:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.756 01:47:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.756 01:47:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.756 01:47:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:49.756 01:47:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.756 01:47:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.756 01:47:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.015 /dev/nbd0 00:05:50.015 01:47:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.015 01:47:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.015 01:47:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:50.015 01:47:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:50.015 01:47:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:50.015 01:47:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:50.015 01:47:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:05:50.015 01:47:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:50.015 01:47:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:50.016 01:47:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:50.016 01:47:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.016 1+0 records in 00:05:50.016 1+0 records out 00:05:50.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000141016 s, 29.0 MB/s 00:05:50.016 01:47:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:50.016 01:47:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:50.016 01:47:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:50.016 01:47:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:50.016 01:47:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:50.016 01:47:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.016 01:47:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.016 01:47:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:50.275 /dev/nbd1 00:05:50.275 01:47:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.275 01:47:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.275 01:47:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:50.275 01:47:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:50.275 01:47:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:50.275 01:47:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:50.275 01:47:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:50.275 01:47:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:50.275 01:47:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:50.275 01:47:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:50.275 01:47:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.275 1+0 records in 00:05:50.275 1+0 records out 00:05:50.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019232 s, 21.3 MB/s 00:05:50.275 01:47:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:50.275 01:47:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:50.275 01:47:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:50.275 01:47:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:50.275 01:47:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:50.275 01:47:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.275 01:47:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.275 01:47:09 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.275 01:47:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.275 01:47:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:50.534 { 00:05:50.534 "nbd_device": "/dev/nbd0", 00:05:50.534 "bdev_name": "Malloc0" 00:05:50.534 }, 00:05:50.534 { 00:05:50.534 "nbd_device": "/dev/nbd1", 00:05:50.534 "bdev_name": "Malloc1" 00:05:50.534 } 00:05:50.534 ]' 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.534 { 00:05:50.534 "nbd_device": "/dev/nbd0", 00:05:50.534 "bdev_name": "Malloc0" 00:05:50.534 }, 00:05:50.534 { 00:05:50.534 "nbd_device": "/dev/nbd1", 00:05:50.534 "bdev_name": "Malloc1" 00:05:50.534 } 00:05:50.534 ]' 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.534 /dev/nbd1' 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.534 /dev/nbd1' 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.534 256+0 records in 00:05:50.534 256+0 records out 00:05:50.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116574 s, 89.9 MB/s 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.534 256+0 records in 00:05:50.534 256+0 records out 00:05:50.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017293 s, 60.6 MB/s 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.534 256+0 records in 00:05:50.534 256+0 records out 00:05:50.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249628 s, 42.0 MB/s 00:05:50.534 01:47:10 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.534 01:47:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.793 01:47:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.793 01:47:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.793 01:47:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.793 01:47:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.793 01:47:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.793 01:47:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.793 01:47:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.793 01:47:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.793 01:47:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.793 01:47:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.052 01:47:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.052 01:47:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.052 01:47:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.052 01:47:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.052 01:47:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:51.052 01:47:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.052 01:47:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.052 01:47:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.052 01:47:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.052 01:47:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.052 01:47:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.311 01:47:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.311 01:47:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.311 01:47:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.311 01:47:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.311 01:47:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.311 01:47:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.311 01:47:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:51.311 01:47:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.311 01:47:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.311 01:47:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.311 01:47:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.311 01:47:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.311 01:47:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.570 01:47:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.951 [2024-10-09 01:47:12.635210] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.210 [2024-10-09 01:47:12.822831] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.210 [2024-10-09 01:47:12.822831] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.210 [2024-10-09 01:47:13.004575] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:53.210 [2024-10-09 01:47:13.004636] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:54.591 01:47:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.591 01:47:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:54.591 spdk_app_start Round 1 00:05:54.591 01:47:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3095342 /var/tmp/spdk-nbd.sock 00:05:54.591 01:47:14 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3095342 ']' 00:05:54.591 01:47:14 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.591 01:47:14 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.591 01:47:14 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
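Each app_repeat round repeats the nbd_rpc_data_verify flow traced in Round 0 above: create two 64 MB malloc bdevs with 4096-byte blocks, export them as /dev/nbd0 and /dev/nbd1, write a 1 MiB random pattern to each, byte-compare it back, then detach. Stripped of the trace noise, one round amounts to (rpc.py path shortened; -s points at the test's /var/tmp/spdk-nbd.sock):

  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096            # -> Malloc0
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096            # -> Malloc1
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256                    # 1 MiB pattern file
  for d in /dev/nbd0 /dev/nbd1; do
    dd if=nbdrandtest of=$d bs=4096 count=256 oflag=direct               # write pattern
    cmp -b -n 1M nbdrandtest $d                                          # verify readback
  done
  rm nbdrandtest
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1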
00:05:54.591 01:47:14 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.591 01:47:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.850 01:47:14 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.851 01:47:14 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:54.851 01:47:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.110 Malloc0 00:05:55.110 01:47:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.369 Malloc1 00:05:55.369 01:47:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.369 01:47:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.369 01:47:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.369 01:47:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.369 01:47:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.369 01:47:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.369 01:47:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.369 01:47:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.369 01:47:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.369 01:47:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.369 01:47:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.369 01:47:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.369 01:47:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.369 01:47:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.369 01:47:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.369 01:47:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.629 /dev/nbd0 00:05:55.629 01:47:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.629 01:47:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.629 01:47:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:55.629 01:47:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:55.629 01:47:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:55.629 01:47:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:55.629 01:47:15 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:55.629 01:47:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:55.629 01:47:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:55.629 01:47:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:55.629 01:47:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:55.629 1+0 records in 00:05:55.629 1+0 records out 00:05:55.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003894 s, 10.5 MB/s 00:05:55.629 01:47:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:55.629 01:47:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:55.629 01:47:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:55.629 01:47:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:55.629 01:47:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:55.629 01:47:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.629 01:47:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.629 01:47:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.889 /dev/nbd1 00:05:55.889 01:47:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.889 01:47:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.889 01:47:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:55.889 01:47:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:55.889 01:47:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:55.889 01:47:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:55.889 01:47:15 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:55.889 01:47:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:55.889 01:47:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:55.889 01:47:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:55.889 01:47:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.889 1+0 records in 00:05:55.889 1+0 records out 00:05:55.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271655 s, 15.1 MB/s 00:05:55.889 01:47:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:55.889 01:47:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:55.889 01:47:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:55.889 01:47:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:55.889 01:47:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:55.889 01:47:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.889 01:47:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.889 01:47:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.889 01:47:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.889 01:47:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.889 01:47:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:55.889 { 00:05:55.889 "nbd_device": "/dev/nbd0", 00:05:55.889 "bdev_name": "Malloc0" 00:05:55.889 }, 00:05:55.889 { 00:05:55.889 "nbd_device": "/dev/nbd1", 00:05:55.889 "bdev_name": "Malloc1" 00:05:55.889 } 00:05:55.889 ]' 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.148 { 00:05:56.148 "nbd_device": "/dev/nbd0", 00:05:56.148 "bdev_name": "Malloc0" 00:05:56.148 }, 00:05:56.148 { 00:05:56.148 "nbd_device": "/dev/nbd1", 00:05:56.148 "bdev_name": "Malloc1" 00:05:56.148 } 00:05:56.148 ]' 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.148 /dev/nbd1' 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.148 /dev/nbd1' 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.148 256+0 records in 00:05:56.148 256+0 records out 00:05:56.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106821 s, 98.2 MB/s 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.148 256+0 records in 00:05:56.148 256+0 records out 00:05:56.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223501 s, 46.9 MB/s 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.148 256+0 records in 00:05:56.148 256+0 records out 00:05:56.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197491 s, 53.1 MB/s 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.148 01:47:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.407 01:47:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.407 01:47:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.407 01:47:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.407 01:47:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.407 01:47:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.407 01:47:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.407 01:47:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.407 01:47:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.407 01:47:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.407 01:47:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.666 01:47:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.666 01:47:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.666 01:47:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.666 01:47:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.666 01:47:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.666 01:47:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.666 01:47:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.666 01:47:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.666 01:47:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.666 01:47:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.666 01:47:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.666 01:47:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.666 01:47:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.666 01:47:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.926 01:47:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.926 01:47:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.926 01:47:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.926 01:47:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.926 01:47:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.926 01:47:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.926 01:47:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.926 01:47:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.926 01:47:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.926 01:47:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.185 01:47:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:58.564 [2024-10-09 01:47:18.226369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.823 [2024-10-09 01:47:18.414790] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.823 [2024-10-09 01:47:18.414799] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.823 [2024-10-09 01:47:18.590946] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:58.823 [2024-10-09 01:47:18.591000] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.204 01:47:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.204 01:47:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:00.204 spdk_app_start Round 2 00:06:00.204 01:47:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3095342 /var/tmp/spdk-nbd.sock 00:06:00.204 01:47:19 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3095342 ']' 00:06:00.204 01:47:19 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.204 01:47:19 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.204 01:47:19 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
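The MB/s figures dd prints throughout these rounds are simply bytes divided by elapsed seconds, in decimal megabytes: the 1 MiB write to /dev/nbd0 in Round 1 above reported 46.9 MB/s because 1048576 / 0.0223501 s ≈ 46.9e6 B/s. On the single 4 KiB waitfornbd probes the same arithmetic largely measures per-call overhead rather than device bandwidth, which is why those rates scatter (10.5 to 29.0 MB/s above). A quick check of the arithmetic:

  awk 'BEGIN { printf "%.1f MB/s\n", 1048576 / 0.0223501 / 1e6 }'   # -> 46.9 MB/s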
00:06:00.204 01:47:19 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.204 01:47:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.464 01:47:20 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.464 01:47:20 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:00.464 01:47:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.723 Malloc0 00:06:00.723 01:47:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.983 Malloc1 00:06:00.983 01:47:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.983 01:47:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.983 01:47:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.983 01:47:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.983 01:47:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.983 01:47:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.983 01:47:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.983 01:47:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.983 01:47:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.983 01:47:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.983 01:47:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.983 01:47:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.983 01:47:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.983 01:47:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.983 01:47:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.983 01:47:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.983 /dev/nbd0 00:06:00.983 01:47:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.243 01:47:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.243 01:47:20 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:01.243 01:47:20 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:01.243 01:47:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:01.243 01:47:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:01.243 01:47:20 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:01.243 01:47:20 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:01.243 01:47:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:01.243 01:47:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:01.243 01:47:20 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:01.243 1+0 records in 00:06:01.243 1+0 records out 00:06:01.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263554 s, 15.5 MB/s 00:06:01.243 01:47:20 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:06:01.243 01:47:20 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:01.243 01:47:20 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:06:01.243 01:47:20 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:01.243 01:47:20 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:01.243 01:47:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.243 01:47:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.243 01:47:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.243 /dev/nbd1 00:06:01.503 01:47:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.503 01:47:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.503 01:47:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:01.503 01:47:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:01.503 01:47:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:01.503 01:47:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:01.503 01:47:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:01.503 01:47:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:01.503 01:47:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:01.503 01:47:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:01.503 01:47:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.503 1+0 records in 00:06:01.503 1+0 records out 00:06:01.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280314 s, 14.6 MB/s 00:06:01.503 01:47:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:06:01.503 01:47:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:01.503 01:47:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:06:01.503 01:47:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:01.503 01:47:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:01.503 01:47:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.503 01:47:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.503 01:47:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.503 01:47:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.503 01:47:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.503 01:47:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:01.503 { 00:06:01.503 "nbd_device": "/dev/nbd0", 00:06:01.503 "bdev_name": "Malloc0" 00:06:01.503 }, 00:06:01.503 { 00:06:01.503 "nbd_device": "/dev/nbd1", 00:06:01.503 "bdev_name": "Malloc1" 00:06:01.503 } 00:06:01.503 ]' 00:06:01.503 01:47:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.503 { 00:06:01.503 "nbd_device": "/dev/nbd0", 00:06:01.503 "bdev_name": "Malloc0" 00:06:01.503 }, 00:06:01.503 { 00:06:01.503 "nbd_device": "/dev/nbd1", 00:06:01.503 "bdev_name": "Malloc1" 00:06:01.503 } 00:06:01.503 ]' 00:06:01.503 01:47:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.503 01:47:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.503 /dev/nbd1' 00:06:01.503 01:47:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.503 /dev/nbd1' 00:06:01.503 01:47:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.763 256+0 records in 00:06:01.763 256+0 records out 00:06:01.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117864 s, 89.0 MB/s 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.763 256+0 records in 00:06:01.763 256+0 records out 00:06:01.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016942 s, 61.9 MB/s 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.763 256+0 records in 00:06:01.763 256+0 records out 00:06:01.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254855 s, 41.1 MB/s 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.763 01:47:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.023 01:47:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.023 01:47:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.023 01:47:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.023 01:47:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.023 01:47:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.023 01:47:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.023 01:47:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.023 01:47:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.023 01:47:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.023 01:47:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.023 01:47:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.282 01:47:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.282 01:47:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.282 01:47:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.282 01:47:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.282 01:47:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.282 01:47:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.282 01:47:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.282 01:47:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.282 01:47:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.282 01:47:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.282 01:47:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.282 01:47:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.282 01:47:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.282 01:47:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.282 01:47:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.282 01:47:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.541 01:47:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.541 01:47:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.542 01:47:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.542 01:47:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.542 01:47:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.542 01:47:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.542 01:47:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.801 01:47:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:04.180 [2024-10-09 01:47:23.801225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.180 [2024-10-09 01:47:23.989409] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.180 [2024-10-09 01:47:23.989409] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.439 [2024-10-09 01:47:24.169281] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.439 [2024-10-09 01:47:24.169335] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.816 01:47:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3095342 /var/tmp/spdk-nbd.sock 00:06:05.816 01:47:25 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3095342 ']' 00:06:05.816 01:47:25 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.816 01:47:25 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.816 01:47:25 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
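[editor's note] The nbdrandtest write/verify pass traced a few entries back follows a write-then-compare shape; a hedged sketch of that cycle, paths shortened, sizes as in the trace:

  tmp=test/event/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct    # write the pattern to each nbd disk
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$nbd"                               # byte-compare the readback; nonzero exit fails the test
  done
  rm "$tmp"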
00:06:05.816 01:47:25 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.816 01:47:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.075 01:47:25 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.075 01:47:25 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:06.075 01:47:25 event.app_repeat -- event/event.sh@39 -- # killprocess 3095342 00:06:06.075 01:47:25 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 3095342 ']' 00:06:06.075 01:47:25 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 3095342 00:06:06.075 01:47:25 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:06.075 01:47:25 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.075 01:47:25 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3095342 00:06:06.075 01:47:25 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.075 01:47:25 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.075 01:47:25 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3095342' 00:06:06.075 killing process with pid 3095342 00:06:06.075 01:47:25 event.app_repeat -- common/autotest_common.sh@969 -- # kill 3095342 00:06:06.075 01:47:25 event.app_repeat -- common/autotest_common.sh@974 -- # wait 3095342 00:06:07.454 spdk_app_start is called in Round 0. 00:06:07.454 Shutdown signal received, stop current app iteration 00:06:07.454 Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 reinitialization... 00:06:07.454 spdk_app_start is called in Round 1. 00:06:07.454 Shutdown signal received, stop current app iteration 00:06:07.454 Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 reinitialization... 00:06:07.454 spdk_app_start is called in Round 2. 00:06:07.454 Shutdown signal received, stop current app iteration 00:06:07.454 Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 reinitialization... 00:06:07.454 spdk_app_start is called in Round 3. 
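[editor's note] killprocess, traced a few entries up, refuses to signal anything it cannot positively identify; a simplified reconstruction (error handling trimmed, and where the real helper resolves sudo's child process this sketch just bails):

  killprocess() {
    local pid=$1 process_name=
    kill -0 "$pid" || return 1                               # fail fast if the pid is already gone
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1                   # never SIGTERM the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
  }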
00:06:07.454 Shutdown signal received, stop current app iteration 00:06:07.454 01:47:26 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:07.454 01:47:26 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:07.454 00:06:07.454 real 0m18.886s 00:06:07.454 user 0m38.423s 00:06:07.454 sys 0m3.249s 00:06:07.454 01:47:26 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.454 01:47:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.454 ************************************ 00:06:07.454 END TEST app_repeat 00:06:07.454 ************************************ 00:06:07.454 01:47:26 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:07.454 01:47:26 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:07.454 01:47:26 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.454 01:47:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.454 01:47:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.454 ************************************ 00:06:07.454 START TEST cpu_locks 00:06:07.454 ************************************ 00:06:07.454 01:47:26 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:07.454 * Looking for test storage... 00:06:07.454 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event 00:06:07.454 01:47:27 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:07.454 01:47:27 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:07.454 01:47:27 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:07.454 01:47:27 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.454 01:47:27 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:07.454 01:47:27 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.454 01:47:27 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:07.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.454 --rc genhtml_branch_coverage=1 00:06:07.454 --rc genhtml_function_coverage=1 00:06:07.454 --rc genhtml_legend=1 00:06:07.454 --rc geninfo_all_blocks=1 00:06:07.454 --rc geninfo_unexecuted_blocks=1 00:06:07.454 00:06:07.454 ' 00:06:07.454 01:47:27 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:07.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.454 --rc genhtml_branch_coverage=1 00:06:07.454 --rc genhtml_function_coverage=1 00:06:07.454 --rc genhtml_legend=1 00:06:07.454 --rc geninfo_all_blocks=1 00:06:07.454 --rc geninfo_unexecuted_blocks=1 00:06:07.454 00:06:07.454 ' 00:06:07.454 01:47:27 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:07.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.454 --rc genhtml_branch_coverage=1 00:06:07.454 --rc genhtml_function_coverage=1 00:06:07.454 --rc genhtml_legend=1 00:06:07.454 --rc geninfo_all_blocks=1 00:06:07.454 --rc geninfo_unexecuted_blocks=1 00:06:07.454 00:06:07.454 ' 00:06:07.454 01:47:27 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:07.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.454 --rc genhtml_branch_coverage=1 00:06:07.454 --rc genhtml_function_coverage=1 00:06:07.454 --rc genhtml_legend=1 00:06:07.454 --rc geninfo_all_blocks=1 00:06:07.454 --rc geninfo_unexecuted_blocks=1 00:06:07.454 00:06:07.454 ' 00:06:07.454 01:47:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:07.454 01:47:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:07.454 01:47:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:07.454 01:47:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:07.454 01:47:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.454 01:47:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.454 01:47:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.454 ************************************ 
00:06:07.454 START TEST default_locks 00:06:07.454 ************************************ 00:06:07.454 01:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:07.454 01:47:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3098042 00:06:07.454 01:47:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3098042 00:06:07.454 01:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3098042 ']' 00:06:07.454 01:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.454 01:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.454 01:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.454 01:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.454 01:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.454 01:47:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.714 [2024-10-09 01:47:27.313323] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:06:07.714 [2024-10-09 01:47:27.313422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3098042 ] 00:06:07.714 [2024-10-09 01:47:27.442113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.973 [2024-10-09 01:47:27.631374] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.910 01:47:28 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.911 01:47:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:08.911 01:47:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3098042 00:06:08.911 01:47:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3098042 00:06:08.911 01:47:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.170 lslocks: write error 00:06:09.170 01:47:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3098042 00:06:09.170 01:47:28 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 3098042 ']' 00:06:09.170 01:47:28 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 3098042 00:06:09.170 01:47:28 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:09.170 01:47:28 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.170 01:47:28 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3098042 00:06:09.170 01:47:28 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.170 01:47:28 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.170 01:47:28 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 3098042' 00:06:09.170 killing process with pid 3098042 00:06:09.170 01:47:28 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 3098042 00:06:09.170 01:47:28 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 3098042 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3098042 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3098042 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3098042 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3098042 ']' 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
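[editor's note] locks_exist, used in the happy path above and again in the post-kill check that follows, is essentially one pipeline, as traced (the "lslocks: write error" line is lslocks noise, not a test failure):

  locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock    # any lock held by the pid whose path mentions spdk_cpu_lock
  }
  locks_exist 3098042                          # pid from the run above; true while spdk_tgt holds its core lock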
00:06:11.728 01:47:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.728 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3098042) - No such process 00:06:11.728 ERROR: process (pid: 3098042) is no longer running 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:11.728 00:06:11.728 real 0m4.044s 00:06:11.728 user 0m3.932s 00:06:11.728 sys 0m0.762s 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.728 01:47:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.728 ************************************ 00:06:11.728 END TEST default_locks 00:06:11.728 ************************************ 00:06:11.728 01:47:31 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:11.728 01:47:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.728 01:47:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.728 01:47:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.728 ************************************ 00:06:11.728 START TEST default_locks_via_rpc 00:06:11.728 ************************************ 00:06:11.728 01:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:11.728 01:47:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3098604 00:06:11.728 01:47:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3098604 00:06:11.728 01:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3098604 ']' 00:06:11.728 01:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.728 01:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.728 01:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
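[editor's note] The NOT wrapper exercised just above turns the dead-pid waitforlisten failure into a pass by inverting the exit status; a reduced sketch (the signal-decoding branch seen in the trace, es > 128, is elided):

  NOT() {
    local es=0
    "$@" || es=$?        # run the wrapped command, capture its failure
    (( es != 0 ))        # succeed only when it failed; same effect as the traced (( !es == 0 ))
  }
  NOT false              # returns 0
  NOT true               # returns 1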
00:06:11.728 01:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.728 01:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.728 01:47:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.728 [2024-10-09 01:47:31.429244] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:06:11.728 [2024-10-09 01:47:31.429343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3098604 ] 00:06:11.988 [2024-10-09 01:47:31.555643] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.988 [2024-10-09 01:47:31.752094] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.927 01:47:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.927 01:47:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:12.927 01:47:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:12.927 01:47:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.927 01:47:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.927 01:47:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.927 01:47:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:12.927 01:47:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:12.927 01:47:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:12.927 01:47:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:12.927 01:47:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:12.927 01:47:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.927 01:47:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.927 01:47:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.927 01:47:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3098604 00:06:12.927 01:47:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3098604 00:06:12.927 01:47:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.495 01:47:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3098604 00:06:13.495 01:47:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 3098604 ']' 00:06:13.495 01:47:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 3098604 00:06:13.495 01:47:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:13.495 01:47:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.495 01:47:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 3098604 00:06:13.495 01:47:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.495 01:47:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.495 01:47:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3098604' 00:06:13.495 killing process with pid 3098604 00:06:13.495 01:47:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 3098604 00:06:13.495 01:47:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 3098604 00:06:16.033 00:06:16.033 real 0m4.202s 00:06:16.033 user 0m4.156s 00:06:16.033 sys 0m0.788s 00:06:16.033 01:47:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.033 01:47:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.033 ************************************ 00:06:16.033 END TEST default_locks_via_rpc 00:06:16.033 ************************************ 00:06:16.033 01:47:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:16.033 01:47:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.033 01:47:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.033 01:47:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.033 ************************************ 00:06:16.033 START TEST non_locking_app_on_locked_coremask 00:06:16.033 ************************************ 00:06:16.033 01:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:16.033 01:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3099245 00:06:16.033 01:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3099245 /var/tmp/spdk.sock 00:06:16.033 01:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3099245 ']' 00:06:16.033 01:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.033 01:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.033 01:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.033 01:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.033 01:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.033 01:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.033 [2024-10-09 01:47:35.711115] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
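[editor's note] The default_locks_via_rpc pass that just finished drives the same core lock through RPCs instead of command-line flags; the two calls, as traced, with the socket path from the run and lslocks standing in for the rpc_cmd plumbing:

  pid=3098604                                                              # target pid from the trace
  scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks     # drop the core locks at runtime
  lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "no locks held, as expected"
  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks      # take them back
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "locks held again"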
00:06:16.033 [2024-10-09 01:47:35.711234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3099245 ] 00:06:16.033 [2024-10-09 01:47:35.840296] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.291 [2024-10-09 01:47:36.036757] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.228 01:47:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.228 01:47:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:17.228 01:47:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3099343 00:06:17.228 01:47:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3099343 /var/tmp/spdk2.sock 00:06:17.228 01:47:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3099343 ']' 00:06:17.228 01:47:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.228 01:47:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:17.228 01:47:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.228 01:47:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.228 01:47:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.228 01:47:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.228 [2024-10-09 01:47:36.865882] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:06:17.228 [2024-10-09 01:47:36.865990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3099343 ] 00:06:17.228 [2024-10-09 01:47:37.042934] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
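[editor's note] Put together, the non_locking setup above amounts to two targets sharing core 0 with only one of them holding the lock; condensed from the two launch lines in the trace (backgrounding with & is a simplification; the test itself synchronizes through waitforlisten):

  build/bin/spdk_tgt -m 0x1 &                                                  # claims the core-0 lock
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, takes no lock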
00:06:17.228 [2024-10-09 01:47:37.042987] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.797 [2024-10-09 01:47:37.432323] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.704 01:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.704 01:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:19.704 01:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3099245 00:06:19.704 01:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3099245 00:06:19.704 01:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.272 lslocks: write error 00:06:20.272 01:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3099245 00:06:20.272 01:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3099245 ']' 00:06:20.272 01:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3099245 00:06:20.272 01:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:20.272 01:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.272 01:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3099245 00:06:20.272 01:47:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.272 01:47:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.272 01:47:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3099245' 00:06:20.272 killing process with pid 3099245 00:06:20.273 01:47:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3099245 00:06:20.273 01:47:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3099245 00:06:25.565 01:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3099343 00:06:25.565 01:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3099343 ']' 00:06:25.565 01:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3099343 00:06:25.565 01:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:25.565 01:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.565 01:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3099343 00:06:25.565 01:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.565 01:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.565 01:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3099343' 00:06:25.565 
killing process with pid 3099343 00:06:25.565 01:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3099343 00:06:25.565 01:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3099343 00:06:28.101 00:06:28.101 real 0m11.701s 00:06:28.101 user 0m11.804s 00:06:28.101 sys 0m1.488s 00:06:28.101 01:47:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.101 01:47:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.101 ************************************ 00:06:28.101 END TEST non_locking_app_on_locked_coremask 00:06:28.101 ************************************ 00:06:28.101 01:47:47 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:28.101 01:47:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.101 01:47:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.101 01:47:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.101 ************************************ 00:06:28.101 START TEST locking_app_on_unlocked_coremask 00:06:28.101 ************************************ 00:06:28.101 01:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:28.101 01:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3100776 00:06:28.101 01:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3100776 /var/tmp/spdk.sock 00:06:28.101 01:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:28.101 01:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3100776 ']' 00:06:28.101 01:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.101 01:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.101 01:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.101 01:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.101 01:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.101 [2024-10-09 01:47:47.501682] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:06:28.101 [2024-10-09 01:47:47.501789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100776 ] 00:06:28.102 [2024-10-09 01:47:47.631844] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:28.102 [2024-10-09 01:47:47.631891] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.102 [2024-10-09 01:47:47.824521] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.039 01:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.039 01:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:29.039 01:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3100960 00:06:29.039 01:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:29.039 01:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3100960 /var/tmp/spdk2.sock 00:06:29.039 01:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3100960 ']' 00:06:29.039 01:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.039 01:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.039 01:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.040 01:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.040 01:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.040 [2024-10-09 01:47:48.688822] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
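[editor's note] locking_app_on_unlocked_coremask flips the order: the first target starts with --disable-cpumask-locks, leaving core 0 unclaimed, so the second, lock-taking target can still start. Condensed from the launch lines above, with the same backgrounding simplification:

  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &    # primary leaves core 0 unclaimed
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # second instance takes the lock normally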
00:06:29.040 [2024-10-09 01:47:48.688925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100960 ] 00:06:29.299 [2024-10-09 01:47:48.861797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.558 [2024-10-09 01:47:49.253167] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.464 01:47:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.464 01:47:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:31.464 01:47:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3100960 00:06:31.464 01:47:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3100960 00:06:31.464 01:47:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.897 lslocks: write error 00:06:32.897 01:47:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3100776 00:06:32.897 01:47:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3100776 ']' 00:06:32.897 01:47:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3100776 00:06:32.897 01:47:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:32.897 01:47:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.897 01:47:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3100776 00:06:32.897 01:47:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.897 01:47:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.897 01:47:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3100776' 00:06:32.897 killing process with pid 3100776 00:06:32.897 01:47:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3100776 00:06:32.897 01:47:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3100776 00:06:38.240 01:47:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3100960 00:06:38.240 01:47:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3100960 ']' 00:06:38.240 01:47:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3100960 00:06:38.240 01:47:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:38.240 01:47:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.240 01:47:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3100960 00:06:38.240 01:47:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.240 01:47:57 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.240 01:47:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3100960' 00:06:38.240 killing process with pid 3100960 00:06:38.240 01:47:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3100960 00:06:38.240 01:47:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3100960 00:06:40.145 00:06:40.145 real 0m12.537s 00:06:40.145 user 0m12.781s 00:06:40.145 sys 0m1.845s 00:06:40.145 01:47:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.145 01:47:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.145 ************************************ 00:06:40.145 END TEST locking_app_on_unlocked_coremask 00:06:40.145 ************************************ 00:06:40.404 01:47:59 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:40.404 01:47:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.404 01:47:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.404 01:47:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.404 ************************************ 00:06:40.404 START TEST locking_app_on_locked_coremask 00:06:40.404 ************************************ 00:06:40.404 01:48:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:40.404 01:48:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3102549 00:06:40.404 01:48:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3102549 /var/tmp/spdk.sock 00:06:40.404 01:48:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.404 01:48:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3102549 ']' 00:06:40.404 01:48:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.404 01:48:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.404 01:48:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.404 01:48:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.404 01:48:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.404 [2024-10-09 01:48:00.103898] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
00:06:40.404 [2024-10-09 01:48:00.104003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3102549 ] 00:06:40.663 [2024-10-09 01:48:00.233803] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.663 [2024-10-09 01:48:00.434446] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.600 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.600 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:41.600 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3102742 00:06:41.600 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3102742 /var/tmp/spdk2.sock 00:06:41.600 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:41.600 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:41.600 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3102742 /var/tmp/spdk2.sock 00:06:41.600 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:41.600 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.600 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:41.600 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.600 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3102742 /var/tmp/spdk2.sock 00:06:41.600 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3102742 ']' 00:06:41.600 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.600 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.600 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.600 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.600 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.600 [2024-10-09 01:48:01.302852] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
00:06:41.600 [2024-10-09 01:48:01.302958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3102742 ] 00:06:41.858 [2024-10-09 01:48:01.476518] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3102549 has claimed it. 00:06:41.858 [2024-10-09 01:48:01.476582] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:42.117 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3102742) - No such process 00:06:42.117 ERROR: process (pid: 3102742) is no longer running 00:06:42.117 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.117 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:42.117 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:42.117 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.117 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.117 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.117 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3102549 00:06:42.117 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3102549 00:06:42.117 01:48:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.053 lslocks: write error 00:06:43.053 01:48:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3102549 00:06:43.053 01:48:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3102549 ']' 00:06:43.053 01:48:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3102549 00:06:43.053 01:48:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:43.053 01:48:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.053 01:48:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3102549 00:06:43.053 01:48:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.053 01:48:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.053 01:48:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3102549' 00:06:43.053 killing process with pid 3102549 00:06:43.053 01:48:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3102549 00:06:43.053 01:48:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3102549 00:06:45.584 00:06:45.584 real 0m5.111s 00:06:45.584 user 0m5.227s 00:06:45.584 sys 0m1.112s 00:06:45.584 01:48:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:06:45.584 01:48:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.584 ************************************ 00:06:45.584 END TEST locking_app_on_locked_coremask 00:06:45.584 ************************************ 00:06:45.584 01:48:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:45.584 01:48:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.584 01:48:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.584 01:48:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.584 ************************************ 00:06:45.584 START TEST locking_overlapped_coremask 00:06:45.584 ************************************ 00:06:45.584 01:48:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:45.584 01:48:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3103710 00:06:45.584 01:48:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3103710 /var/tmp/spdk.sock 00:06:45.584 01:48:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3103710 ']' 00:06:45.584 01:48:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.584 01:48:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.584 01:48:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.584 01:48:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.584 01:48:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:45.584 01:48:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.584 [2024-10-09 01:48:05.300457] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
00:06:45.584 [2024-10-09 01:48:05.300597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3103710 ] 00:06:45.842 [2024-10-09 01:48:05.427225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.842 [2024-10-09 01:48:05.615010] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.842 [2024-10-09 01:48:05.615023] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.842 [2024-10-09 01:48:05.615030] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.778 01:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.778 01:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:46.778 01:48:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:46.778 01:48:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3103848 00:06:46.778 01:48:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3103848 /var/tmp/spdk2.sock 00:06:46.778 01:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:46.778 01:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3103848 /var/tmp/spdk2.sock 00:06:46.778 01:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:46.778 01:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.778 01:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:46.778 01:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.778 01:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3103848 /var/tmp/spdk2.sock 00:06:46.778 01:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3103848 ']' 00:06:46.778 01:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.778 01:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.778 01:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.778 01:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.778 01:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.778 [2024-10-09 01:48:06.498006] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
00:06:46.778 [2024-10-09 01:48:06.498119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3103848 ] 00:06:47.037 [2024-10-09 01:48:06.676328] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3103710 has claimed it. 00:06:47.037 [2024-10-09 01:48:06.676403] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:47.605 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3103848) - No such process 00:06:47.605 ERROR: process (pid: 3103848) is no longer running 00:06:47.605 01:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.605 01:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:47.605 01:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:47.605 01:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:47.605 01:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:47.605 01:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:47.605 01:48:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:47.605 01:48:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:47.605 01:48:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:47.605 01:48:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:47.605 01:48:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3103710 00:06:47.605 01:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 3103710 ']' 00:06:47.605 01:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 3103710 00:06:47.605 01:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:47.605 01:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.605 01:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3103710 00:06:47.605 01:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.605 01:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.605 01:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3103710' 00:06:47.605 killing process with pid 3103710 00:06:47.605 01:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 3103710 00:06:47.605 01:48:07 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 3103710 00:06:50.140 00:06:50.140 real 0m4.482s 00:06:50.140 user 0m11.850s 00:06:50.140 sys 0m0.736s 00:06:50.140 01:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.140 01:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.140 ************************************ 00:06:50.140 END TEST locking_overlapped_coremask 00:06:50.140 ************************************ 00:06:50.140 01:48:09 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:50.140 01:48:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.140 01:48:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.140 01:48:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.140 ************************************ 00:06:50.140 START TEST locking_overlapped_coremask_via_rpc 00:06:50.140 ************************************ 00:06:50.140 01:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:50.140 01:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3104390 00:06:50.140 01:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:50.140 01:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3104390 /var/tmp/spdk.sock 00:06:50.140 01:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3104390 ']' 00:06:50.140 01:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.140 01:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.140 01:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.140 01:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.140 01:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.140 [2024-10-09 01:48:09.856084] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:06:50.140 [2024-10-09 01:48:09.856193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3104390 ] 00:06:50.399 [2024-10-09 01:48:09.987134] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
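Note on this test: unlike the earlier cpu_locks tests, locking_overlapped_coremask_via_rpc starts its targets with --disable-cpumask-locks, which is why the log above prints "CPU core locks deactivated". No /var/tmp/spdk_cpu_lock_* files are created at startup; the locks are only claimed later over JSON-RPC. A minimal sketch of that startup mode, using only the binary path and flags visible in this trace:

    # start the first target on cores 0-2 without claiming core locks (sketch, paths as in this log)
    /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null   # prints nothing: locking is deferred to the RPC exercised below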
00:06:50.399 [2024-10-09 01:48:09.987187] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.399 [2024-10-09 01:48:10.192502] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.399 [2024-10-09 01:48:10.192565] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.399 [2024-10-09 01:48:10.192572] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.337 01:48:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.337 01:48:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:51.337 01:48:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3104568 00:06:51.337 01:48:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3104568 /var/tmp/spdk2.sock 00:06:51.337 01:48:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:51.337 01:48:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3104568 ']' 00:06:51.337 01:48:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.337 01:48:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.337 01:48:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.337 01:48:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.337 01:48:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.337 [2024-10-09 01:48:11.102760] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:06:51.337 [2024-10-09 01:48:11.102865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3104568 ] 00:06:51.595 [2024-10-09 01:48:11.277556] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
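The two core masks chosen here overlap by design: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so both targets can boot (locks are disabled) but only one of them can later claim core 2. The overlap is easy to confirm with shell arithmetic:

    printf 'shared mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2: core 2 is in both masks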
00:06:51.595 [2024-10-09 01:48:11.277611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.163 [2024-10-09 01:48:11.691618] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.163 [2024-10-09 01:48:11.691697] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.163 [2024-10-09 01:48:11.691727] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.070 [2024-10-09 01:48:13.647679] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3104390 has claimed it. 
00:06:54.070 request: 00:06:54.070 { 00:06:54.070 "method": "framework_enable_cpumask_locks", 00:06:54.070 "req_id": 1 00:06:54.070 } 00:06:54.070 Got JSON-RPC error response 00:06:54.070 response: 00:06:54.070 { 00:06:54.070 "code": -32603, 00:06:54.070 "message": "Failed to claim CPU core: 2" 00:06:54.070 } 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3104390 /var/tmp/spdk.sock 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3104390 ']' 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3104568 /var/tmp/spdk2.sock 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3104568 ']' 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
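The exchange above is the point of this test: framework_enable_cpumask_locks succeeds on the first target (locking cores 0-2), and the same RPC against the second target's socket fails with -32603 because core 2 is already held. Replayed by hand with the rpc.py and socket paths from this trace, the two calls would look roughly like:

    ./scripts/rpc.py framework_enable_cpumask_locks                          # first target: succeeds, creates locks for cores 0-2
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: -32603 "Failed to claim CPU core: 2"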
00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.070 01:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.329 01:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.329 01:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:54.329 01:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:54.329 01:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:54.329 01:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:54.329 01:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:54.329 00:06:54.329 real 0m4.308s 00:06:54.329 user 0m1.164s 00:06:54.329 sys 0m0.244s 00:06:54.329 01:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.329 01:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.329 ************************************ 00:06:54.329 END TEST locking_overlapped_coremask_via_rpc 00:06:54.329 ************************************ 00:06:54.329 01:48:14 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:54.329 01:48:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3104390 ]] 00:06:54.329 01:48:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3104390 00:06:54.329 01:48:14 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3104390 ']' 00:06:54.329 01:48:14 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3104390 00:06:54.329 01:48:14 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:54.329 01:48:14 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.329 01:48:14 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3104390 00:06:54.588 01:48:14 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.588 01:48:14 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.588 01:48:14 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3104390' 00:06:54.588 killing process with pid 3104390 00:06:54.588 01:48:14 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3104390 00:06:54.588 01:48:14 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3104390 00:06:57.124 01:48:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3104568 ]] 00:06:57.124 01:48:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3104568 00:06:57.124 01:48:16 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3104568 ']' 00:06:57.124 01:48:16 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3104568 00:06:57.124 01:48:16 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:57.124 01:48:16 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:06:57.124 01:48:16 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3104568 00:06:57.124 01:48:16 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:57.124 01:48:16 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:57.124 01:48:16 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3104568' 00:06:57.124 killing process with pid 3104568 00:06:57.124 01:48:16 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3104568 00:06:57.124 01:48:16 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3104568 00:06:59.661 01:48:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:59.661 01:48:19 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:59.661 01:48:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3104390 ]] 00:06:59.661 01:48:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3104390 00:06:59.661 01:48:19 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3104390 ']' 00:06:59.661 01:48:19 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3104390 00:06:59.661 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3104390) - No such process 00:06:59.661 01:48:19 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3104390 is not found' 00:06:59.661 Process with pid 3104390 is not found 00:06:59.661 01:48:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3104568 ]] 00:06:59.661 01:48:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3104568 00:06:59.661 01:48:19 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3104568 ']' 00:06:59.661 01:48:19 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3104568 00:06:59.661 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3104568) - No such process 00:06:59.661 01:48:19 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3104568 is not found' 00:06:59.661 Process with pid 3104568 is not found 00:06:59.661 01:48:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:59.661 00:06:59.661 real 0m52.345s 00:06:59.661 user 1m27.045s 00:06:59.661 sys 0m8.451s 00:06:59.661 01:48:19 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.661 01:48:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.661 ************************************ 00:06:59.661 END TEST cpu_locks 00:06:59.661 ************************************ 00:06:59.661 00:06:59.661 real 1m24.050s 00:06:59.661 user 2m27.671s 00:06:59.661 sys 0m13.238s 00:06:59.661 01:48:19 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.661 01:48:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.661 ************************************ 00:06:59.661 END TEST event 00:06:59.661 ************************************ 00:06:59.661 01:48:19 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/thread.sh 00:06:59.661 01:48:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.661 01:48:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.661 01:48:19 -- common/autotest_common.sh@10 -- # set +x 00:06:59.661 ************************************ 00:06:59.661 START TEST thread 00:06:59.661 ************************************ 00:06:59.661 01:48:19 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/thread.sh 00:06:59.920 * Looking for test storage... 00:06:59.920 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread 00:06:59.920 01:48:19 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:59.920 01:48:19 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:59.921 01:48:19 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:59.921 01:48:19 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:59.921 01:48:19 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.921 01:48:19 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.921 01:48:19 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.921 01:48:19 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.921 01:48:19 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.921 01:48:19 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.921 01:48:19 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.921 01:48:19 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.921 01:48:19 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.921 01:48:19 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.921 01:48:19 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.921 01:48:19 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:59.921 01:48:19 thread -- scripts/common.sh@345 -- # : 1 00:06:59.921 01:48:19 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.921 01:48:19 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:59.921 01:48:19 thread -- scripts/common.sh@365 -- # decimal 1 00:06:59.921 01:48:19 thread -- scripts/common.sh@353 -- # local d=1 00:06:59.921 01:48:19 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.921 01:48:19 thread -- scripts/common.sh@355 -- # echo 1 00:06:59.921 01:48:19 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.921 01:48:19 thread -- scripts/common.sh@366 -- # decimal 2 00:06:59.921 01:48:19 thread -- scripts/common.sh@353 -- # local d=2 00:06:59.921 01:48:19 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.921 01:48:19 thread -- scripts/common.sh@355 -- # echo 2 00:06:59.921 01:48:19 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.921 01:48:19 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.921 01:48:19 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.921 01:48:19 thread -- scripts/common.sh@368 -- # return 0 00:06:59.921 01:48:19 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.921 01:48:19 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:59.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.921 --rc genhtml_branch_coverage=1 00:06:59.921 --rc genhtml_function_coverage=1 00:06:59.921 --rc genhtml_legend=1 00:06:59.921 --rc geninfo_all_blocks=1 00:06:59.921 --rc geninfo_unexecuted_blocks=1 00:06:59.921 00:06:59.921 ' 00:06:59.921 01:48:19 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:59.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.921 --rc genhtml_branch_coverage=1 00:06:59.921 --rc genhtml_function_coverage=1 00:06:59.921 --rc genhtml_legend=1 00:06:59.921 --rc geninfo_all_blocks=1 00:06:59.921 --rc geninfo_unexecuted_blocks=1 00:06:59.921 
00:06:59.921 ' 00:06:59.921 01:48:19 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:59.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.921 --rc genhtml_branch_coverage=1 00:06:59.921 --rc genhtml_function_coverage=1 00:06:59.921 --rc genhtml_legend=1 00:06:59.921 --rc geninfo_all_blocks=1 00:06:59.921 --rc geninfo_unexecuted_blocks=1 00:06:59.921 00:06:59.921 ' 00:06:59.921 01:48:19 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:59.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.921 --rc genhtml_branch_coverage=1 00:06:59.921 --rc genhtml_function_coverage=1 00:06:59.921 --rc genhtml_legend=1 00:06:59.921 --rc geninfo_all_blocks=1 00:06:59.921 --rc geninfo_unexecuted_blocks=1 00:06:59.921 00:06:59.921 ' 00:06:59.921 01:48:19 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:59.921 01:48:19 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:59.921 01:48:19 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.921 01:48:19 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.921 ************************************ 00:06:59.921 START TEST thread_poller_perf 00:06:59.921 ************************************ 00:06:59.921 01:48:19 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:59.921 [2024-10-09 01:48:19.697429] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:06:59.921 [2024-10-09 01:48:19.697525] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3105740 ] 00:07:00.202 [2024-10-09 01:48:19.825066] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.202 [2024-10-09 01:48:20.022293] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.202 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:01.578 [2024-10-08T23:48:21.398Z] ====================================== 00:07:01.578 [2024-10-08T23:48:21.398Z] busy:2306634926 (cyc) 00:07:01.578 [2024-10-08T23:48:21.398Z] total_run_count: 396000 00:07:01.578 [2024-10-08T23:48:21.398Z] tsc_hz: 2300000000 (cyc) 00:07:01.578 [2024-10-08T23:48:21.398Z] ====================================== 00:07:01.578 [2024-10-08T23:48:21.398Z] poller_cost: 5824 (cyc), 2532 (nsec) 00:07:01.578 00:07:01.578 real 0m1.743s 00:07:01.578 user 0m1.560s 00:07:01.578 sys 0m0.176s 00:07:01.578 01:48:21 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.578 01:48:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.578 ************************************ 00:07:01.578 END TEST thread_poller_perf 00:07:01.578 ************************************ 00:07:01.837 01:48:21 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:01.837 01:48:21 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:01.837 01:48:21 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.837 01:48:21 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.837 ************************************ 00:07:01.837 START TEST thread_poller_perf 00:07:01.837 ************************************ 00:07:01.837 01:48:21 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:01.837 [2024-10-09 01:48:21.521989] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:07:01.837 [2024-10-09 01:48:21.522088] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3106075 ] 00:07:01.837 [2024-10-09 01:48:21.646612] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.095 [2024-10-09 01:48:21.831083] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.095 Running 1000 pollers for 1 seconds with 0 microseconds period. 
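The poller_cost line in these tables is derived from the two counters above it; for the 1-microsecond-period run just reported:

    poller_cost = busy / total_run_count = 2306634926 cyc / 396000 calls ≈ 5824 cyc
    5824 cyc at tsc_hz 2300000000 (2.3 cyc/nsec) ≈ 2532 nsec

which matches the printed "poller_cost: 5824 (cyc), 2532 (nsec)". The 0-period run that follows shows a much lower per-call cost (447 cyc), plausibly because its pollers run on every reactor iteration instead of being scheduled on a 1 us timer.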
00:07:03.472 [2024-10-08T23:48:23.292Z] ====================================== 00:07:03.472 [2024-10-08T23:48:23.292Z] busy:2303499378 (cyc) 00:07:03.472 [2024-10-08T23:48:23.292Z] total_run_count: 5144000 00:07:03.472 [2024-10-08T23:48:23.292Z] tsc_hz: 2300000000 (cyc) 00:07:03.472 [2024-10-08T23:48:23.292Z] ====================================== 00:07:03.472 [2024-10-08T23:48:23.292Z] poller_cost: 447 (cyc), 194 (nsec) 00:07:03.472 00:07:03.472 real 0m1.718s 00:07:03.472 user 0m1.551s 00:07:03.472 sys 0m0.161s 00:07:03.472 01:48:23 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.472 01:48:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:03.472 ************************************ 00:07:03.472 END TEST thread_poller_perf 00:07:03.472 ************************************ 00:07:03.472 01:48:23 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:03.472 00:07:03.472 real 0m3.779s 00:07:03.472 user 0m3.253s 00:07:03.472 sys 0m0.537s 00:07:03.472 01:48:23 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.472 01:48:23 thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.472 ************************************ 00:07:03.472 END TEST thread 00:07:03.472 ************************************ 00:07:03.472 01:48:23 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:03.472 01:48:23 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/cmdline.sh 00:07:03.472 01:48:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.472 01:48:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.472 01:48:23 -- common/autotest_common.sh@10 -- # set +x 00:07:03.732 ************************************ 00:07:03.732 START TEST app_cmdline 00:07:03.732 ************************************ 00:07:03.732 01:48:23 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/cmdline.sh 00:07:03.732 * Looking for test storage... 
00:07:03.732 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app 00:07:03.732 01:48:23 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:03.732 01:48:23 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:03.732 01:48:23 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:03.732 01:48:23 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:03.732 01:48:23 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.732 01:48:23 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.732 01:48:23 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.732 01:48:23 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.732 01:48:23 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.732 01:48:23 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.732 01:48:23 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.732 01:48:23 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.732 01:48:23 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.732 01:48:23 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.732 01:48:23 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.732 01:48:23 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:03.732 01:48:23 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:03.732 01:48:23 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.732 01:48:23 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.732 01:48:23 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:03.732 01:48:23 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:03.732 01:48:23 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.733 01:48:23 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:03.733 01:48:23 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.733 01:48:23 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:03.733 01:48:23 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:03.733 01:48:23 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.733 01:48:23 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:03.733 01:48:23 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.733 01:48:23 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.733 01:48:23 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.733 01:48:23 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:03.733 01:48:23 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.733 01:48:23 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:03.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.733 --rc genhtml_branch_coverage=1 00:07:03.733 --rc genhtml_function_coverage=1 00:07:03.733 --rc genhtml_legend=1 00:07:03.733 --rc geninfo_all_blocks=1 00:07:03.733 --rc geninfo_unexecuted_blocks=1 00:07:03.733 00:07:03.733 ' 00:07:03.733 01:48:23 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:03.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.733 --rc genhtml_branch_coverage=1 00:07:03.733 --rc genhtml_function_coverage=1 00:07:03.733 --rc genhtml_legend=1 00:07:03.733 --rc geninfo_all_blocks=1 00:07:03.733 --rc geninfo_unexecuted_blocks=1 
00:07:03.733 00:07:03.733 ' 00:07:03.733 01:48:23 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:03.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.733 --rc genhtml_branch_coverage=1 00:07:03.733 --rc genhtml_function_coverage=1 00:07:03.733 --rc genhtml_legend=1 00:07:03.733 --rc geninfo_all_blocks=1 00:07:03.733 --rc geninfo_unexecuted_blocks=1 00:07:03.733 00:07:03.733 ' 00:07:03.733 01:48:23 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:03.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.733 --rc genhtml_branch_coverage=1 00:07:03.733 --rc genhtml_function_coverage=1 00:07:03.733 --rc genhtml_legend=1 00:07:03.733 --rc geninfo_all_blocks=1 00:07:03.733 --rc geninfo_unexecuted_blocks=1 00:07:03.733 00:07:03.733 ' 00:07:03.733 01:48:23 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:03.733 01:48:23 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3106350 00:07:03.733 01:48:23 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3106350 00:07:03.733 01:48:23 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 3106350 ']' 00:07:03.733 01:48:23 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.733 01:48:23 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.733 01:48:23 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.733 01:48:23 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.733 01:48:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:03.733 01:48:23 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:03.992 [2024-10-09 01:48:23.598186] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
00:07:03.992 [2024-10-09 01:48:23.598282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3106350 ] 00:07:03.992 [2024-10-09 01:48:23.725742] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.250 [2024-10-09 01:48:23.927064] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.188 01:48:24 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.188 01:48:24 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:05.188 01:48:24 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:05.188 { 00:07:05.188 "version": "SPDK v25.01-pre git sha1 92108e0a2", 00:07:05.188 "fields": { 00:07:05.188 "major": 25, 00:07:05.188 "minor": 1, 00:07:05.188 "patch": 0, 00:07:05.188 "suffix": "-pre", 00:07:05.188 "commit": "92108e0a2" 00:07:05.188 } 00:07:05.188 } 00:07:05.188 01:48:24 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:05.188 01:48:24 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:05.188 01:48:24 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:05.188 01:48:24 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:05.188 01:48:24 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:05.188 01:48:24 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:05.188 01:48:24 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.188 01:48:24 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:05.188 01:48:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:05.188 01:48:24 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.188 01:48:24 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:05.188 01:48:24 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:05.188 01:48:24 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.188 01:48:24 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:05.188 01:48:24 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.188 01:48:24 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:07:05.188 01:48:24 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.188 01:48:24 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:07:05.188 01:48:24 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.188 01:48:24 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:07:05.188 01:48:24 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.188 01:48:24 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:07:05.188 01:48:24 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py ]] 00:07:05.188 01:48:24 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.447 request: 00:07:05.447 { 00:07:05.447 "method": "env_dpdk_get_mem_stats", 00:07:05.447 "req_id": 1 00:07:05.447 } 00:07:05.447 Got JSON-RPC error response 00:07:05.447 response: 00:07:05.447 { 00:07:05.447 "code": -32601, 00:07:05.447 "message": "Method not found" 00:07:05.447 } 00:07:05.447 01:48:25 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:05.447 01:48:25 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.447 01:48:25 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:05.447 01:48:25 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.447 01:48:25 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3106350 00:07:05.447 01:48:25 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 3106350 ']' 00:07:05.447 01:48:25 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 3106350 00:07:05.447 01:48:25 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:05.447 01:48:25 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.447 01:48:25 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3106350 00:07:05.447 01:48:25 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.447 01:48:25 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.447 01:48:25 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3106350' 00:07:05.447 killing process with pid 3106350 00:07:05.447 01:48:25 app_cmdline -- common/autotest_common.sh@969 -- # kill 3106350 00:07:05.447 01:48:25 app_cmdline -- common/autotest_common.sh@974 -- # wait 3106350 00:07:07.979 00:07:07.979 real 0m4.265s 00:07:07.979 user 0m4.391s 00:07:07.979 sys 0m0.717s 00:07:07.979 01:48:27 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.979 01:48:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:07.979 ************************************ 00:07:07.979 END TEST app_cmdline 00:07:07.979 ************************************ 00:07:07.979 01:48:27 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/version.sh 00:07:07.979 01:48:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.979 01:48:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.979 01:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:07.979 ************************************ 00:07:07.979 START TEST version 00:07:07.980 ************************************ 00:07:07.980 01:48:27 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/version.sh 00:07:07.980 * Looking for test storage... 
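The -32601 "Method not found" above is the expected outcome, not a failure: cmdline.sh starts spdk_tgt with an RPC allowlist (--rpcs-allowed spdk_get_version,rpc_get_methods, visible earlier in this trace), so any method outside that list is rejected before dispatch. In outline:

    # sketch using only the flags and methods shown in this log
    spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    rpc.py spdk_get_version           # allowed: returns the version JSON printed above
    rpc.py env_dpdk_get_mem_stats     # not on the allowlist: -32601 "Method not found"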
00:07:07.980 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app 00:07:07.980 01:48:27 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:07.980 01:48:27 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:07.980 01:48:27 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:08.238 01:48:27 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:08.238 01:48:27 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.238 01:48:27 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.238 01:48:27 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.238 01:48:27 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.238 01:48:27 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.238 01:48:27 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.238 01:48:27 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.238 01:48:27 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.238 01:48:27 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.238 01:48:27 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.238 01:48:27 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.238 01:48:27 version -- scripts/common.sh@344 -- # case "$op" in 00:07:08.238 01:48:27 version -- scripts/common.sh@345 -- # : 1 00:07:08.238 01:48:27 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.238 01:48:27 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:08.238 01:48:27 version -- scripts/common.sh@365 -- # decimal 1 00:07:08.238 01:48:27 version -- scripts/common.sh@353 -- # local d=1 00:07:08.238 01:48:27 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.238 01:48:27 version -- scripts/common.sh@355 -- # echo 1 00:07:08.238 01:48:27 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.238 01:48:27 version -- scripts/common.sh@366 -- # decimal 2 00:07:08.238 01:48:27 version -- scripts/common.sh@353 -- # local d=2 00:07:08.238 01:48:27 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.238 01:48:27 version -- scripts/common.sh@355 -- # echo 2 00:07:08.238 01:48:27 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.238 01:48:27 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.239 01:48:27 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.239 01:48:27 version -- scripts/common.sh@368 -- # return 0 00:07:08.239 01:48:27 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.239 01:48:27 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:08.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.239 --rc genhtml_branch_coverage=1 00:07:08.239 --rc genhtml_function_coverage=1 00:07:08.239 --rc genhtml_legend=1 00:07:08.239 --rc geninfo_all_blocks=1 00:07:08.239 --rc geninfo_unexecuted_blocks=1 00:07:08.239 00:07:08.239 ' 00:07:08.239 01:48:27 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:08.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.239 --rc genhtml_branch_coverage=1 00:07:08.239 --rc genhtml_function_coverage=1 00:07:08.239 --rc genhtml_legend=1 00:07:08.239 --rc geninfo_all_blocks=1 00:07:08.239 --rc geninfo_unexecuted_blocks=1 00:07:08.239 00:07:08.239 ' 00:07:08.239 01:48:27 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:08.239 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.239 --rc genhtml_branch_coverage=1 00:07:08.239 --rc genhtml_function_coverage=1 00:07:08.239 --rc genhtml_legend=1 00:07:08.239 --rc geninfo_all_blocks=1 00:07:08.239 --rc geninfo_unexecuted_blocks=1 00:07:08.239 00:07:08.239 ' 00:07:08.239 01:48:27 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:08.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.239 --rc genhtml_branch_coverage=1 00:07:08.239 --rc genhtml_function_coverage=1 00:07:08.239 --rc genhtml_legend=1 00:07:08.239 --rc geninfo_all_blocks=1 00:07:08.239 --rc geninfo_unexecuted_blocks=1 00:07:08.239 00:07:08.239 ' 00:07:08.239 01:48:27 version -- app/version.sh@17 -- # get_header_version major 00:07:08.239 01:48:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/version.h 00:07:08.239 01:48:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.239 01:48:27 version -- app/version.sh@14 -- # cut -f2 00:07:08.239 01:48:27 version -- app/version.sh@17 -- # major=25 00:07:08.239 01:48:27 version -- app/version.sh@18 -- # get_header_version minor 00:07:08.239 01:48:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/version.h 00:07:08.239 01:48:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.239 01:48:27 version -- app/version.sh@14 -- # cut -f2 00:07:08.239 01:48:27 version -- app/version.sh@18 -- # minor=1 00:07:08.239 01:48:27 version -- app/version.sh@19 -- # get_header_version patch 00:07:08.239 01:48:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/version.h 00:07:08.239 01:48:27 version -- app/version.sh@14 -- # cut -f2 00:07:08.239 01:48:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.239 01:48:27 version -- app/version.sh@19 -- # patch=0 00:07:08.239 01:48:27 version -- app/version.sh@20 -- # get_header_version suffix 00:07:08.239 01:48:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/version.h 00:07:08.239 01:48:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.239 01:48:27 version -- app/version.sh@14 -- # cut -f2 00:07:08.239 01:48:27 version -- app/version.sh@20 -- # suffix=-pre 00:07:08.239 01:48:27 version -- app/version.sh@22 -- # version=25.1 00:07:08.239 01:48:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:08.239 01:48:27 version -- app/version.sh@28 -- # version=25.1rc0 00:07:08.239 01:48:27 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python 00:07:08.239 01:48:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:08.239 01:48:27 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:08.239 01:48:27 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:08.239 00:07:08.239 real 0m0.269s 00:07:08.239 user 0m0.169s 00:07:08.239 sys 0m0.151s 00:07:08.239 01:48:27 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.239 
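(Annotation: the version test above reconstructs the SPDK version string by scraping include/spdk/version.h. A minimal standalone sketch of that extraction, using the macro names and workspace path visible in the trace; the -pre -> rc0 mapping is inferred from this run's output, not confirmed against version.sh:)

  repo=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
  get_header_version() {
      # Pull one SPDK_VERSION_* macro value out of version.h, as traced above.
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
          "$repo/include/spdk/version.h" | cut -f2 | tr -d '"'
  }
  major=$(get_header_version MAJOR)     # 25 in this run
  minor=$(get_header_version MINOR)     # 1
  patch=$(get_header_version PATCH)     # 0
  suffix=$(get_header_version SUFFIX)   # -pre
  version=$major.$minor
  (( patch != 0 )) && version=$version.$patch
  [[ $suffix == -pre ]] && version=${version}rc0
  echo "$version"                       # 25.1rc0, matching py_version above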
01:48:27 version -- common/autotest_common.sh@10 -- # set +x 00:07:08.239 ************************************ 00:07:08.239 END TEST version 00:07:08.239 ************************************ 00:07:08.239 01:48:27 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:08.239 01:48:27 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:08.239 01:48:27 -- spdk/autotest.sh@194 -- # uname -s 00:07:08.239 01:48:27 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:08.239 01:48:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:08.239 01:48:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:08.239 01:48:27 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:08.239 01:48:27 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:08.239 01:48:27 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:08.239 01:48:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:08.239 01:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:08.239 01:48:28 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:08.239 01:48:28 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:08.239 01:48:28 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:08.239 01:48:28 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:08.239 01:48:28 -- spdk/autotest.sh@276 -- # '[' rdma = rdma ']' 00:07:08.239 01:48:28 -- spdk/autotest.sh@277 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:08.239 01:48:28 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:08.239 01:48:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.239 01:48:28 -- common/autotest_common.sh@10 -- # set +x 00:07:08.239 ************************************ 00:07:08.239 START TEST nvmf_rdma 00:07:08.239 ************************************ 00:07:08.239 01:48:28 nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:08.497 * Looking for test storage... 00:07:08.497 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf 00:07:08.497 01:48:28 nvmf_rdma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:08.498 01:48:28 nvmf_rdma -- common/autotest_common.sh@1681 -- # lcov --version 00:07:08.498 01:48:28 nvmf_rdma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:08.498 01:48:28 nvmf_rdma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.498 01:48:28 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:07:08.498 01:48:28 nvmf_rdma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.498 01:48:28 nvmf_rdma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:08.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.498 --rc genhtml_branch_coverage=1 00:07:08.498 --rc genhtml_function_coverage=1 00:07:08.498 --rc genhtml_legend=1 00:07:08.498 --rc geninfo_all_blocks=1 00:07:08.498 --rc geninfo_unexecuted_blocks=1 00:07:08.498 00:07:08.498 ' 00:07:08.498 01:48:28 nvmf_rdma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:08.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.498 --rc genhtml_branch_coverage=1 00:07:08.498 --rc genhtml_function_coverage=1 00:07:08.498 --rc genhtml_legend=1 00:07:08.498 --rc geninfo_all_blocks=1 00:07:08.498 --rc geninfo_unexecuted_blocks=1 00:07:08.498 00:07:08.498 ' 00:07:08.498 01:48:28 nvmf_rdma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:08.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.498 --rc genhtml_branch_coverage=1 00:07:08.498 --rc genhtml_function_coverage=1 00:07:08.498 --rc genhtml_legend=1 00:07:08.498 --rc geninfo_all_blocks=1 00:07:08.498 --rc geninfo_unexecuted_blocks=1 00:07:08.498 00:07:08.498 ' 00:07:08.498 01:48:28 nvmf_rdma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:08.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.498 --rc genhtml_branch_coverage=1 00:07:08.498 --rc genhtml_function_coverage=1 00:07:08.498 --rc genhtml_legend=1 00:07:08.498 --rc geninfo_all_blocks=1 00:07:08.498 --rc geninfo_unexecuted_blocks=1 00:07:08.498 00:07:08.498 ' 00:07:08.498 01:48:28 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:07:08.498 01:48:28 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:08.498 01:48:28 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:07:08.498 01:48:28 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:08.498 01:48:28 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.498 01:48:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:08.498 ************************************ 00:07:08.498 START TEST nvmf_target_core 00:07:08.498 ************************************ 00:07:08.498 01:48:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:07:08.757 * Looking for test storage... 00:07:08.757 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.757 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:08.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.758 --rc genhtml_branch_coverage=1 00:07:08.758 --rc genhtml_function_coverage=1 00:07:08.758 --rc genhtml_legend=1 00:07:08.758 --rc geninfo_all_blocks=1 00:07:08.758 --rc geninfo_unexecuted_blocks=1 00:07:08.758 00:07:08.758 ' 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:08.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.758 --rc genhtml_branch_coverage=1 00:07:08.758 --rc genhtml_function_coverage=1 00:07:08.758 --rc genhtml_legend=1 00:07:08.758 --rc geninfo_all_blocks=1 00:07:08.758 --rc geninfo_unexecuted_blocks=1 00:07:08.758 00:07:08.758 ' 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:08.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.758 --rc genhtml_branch_coverage=1 00:07:08.758 --rc genhtml_function_coverage=1 00:07:08.758 --rc genhtml_legend=1 00:07:08.758 --rc geninfo_all_blocks=1 00:07:08.758 --rc geninfo_unexecuted_blocks=1 00:07:08.758 00:07:08.758 ' 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:08.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.758 --rc genhtml_branch_coverage=1 00:07:08.758 --rc genhtml_function_coverage=1 00:07:08.758 --rc genhtml_legend=1 00:07:08.758 --rc geninfo_all_blocks=1 00:07:08.758 --rc geninfo_unexecuted_blocks=1 00:07:08.758 00:07:08.758 ' 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:08.758 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:08.758 
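(Annotation: each test in this log is driven by the harness's run_test wrapper, which produces the asterisk banners and the real/user/sys timing blocks seen throughout. A simplified sketch of that pattern follows; this is an illustration, not SPDK's actual run_test implementation:)

  run_test() {
      # Banner, timed execution of the test script, closing banner.
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
  # e.g.: run_test nvmf_abort /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma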
************************************ 00:07:08.758 START TEST nvmf_abort 00:07:08.758 ************************************ 00:07:08.758 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:07:09.018 * Looking for test storage... 00:07:09.018 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:09.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.018 --rc genhtml_branch_coverage=1 00:07:09.018 --rc genhtml_function_coverage=1 00:07:09.018 --rc genhtml_legend=1 00:07:09.018 --rc geninfo_all_blocks=1 00:07:09.018 --rc geninfo_unexecuted_blocks=1 00:07:09.018 00:07:09.018 ' 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:09.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.018 --rc genhtml_branch_coverage=1 00:07:09.018 --rc genhtml_function_coverage=1 00:07:09.018 --rc genhtml_legend=1 00:07:09.018 --rc geninfo_all_blocks=1 00:07:09.018 --rc geninfo_unexecuted_blocks=1 00:07:09.018 00:07:09.018 ' 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:09.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.018 --rc genhtml_branch_coverage=1 00:07:09.018 --rc genhtml_function_coverage=1 00:07:09.018 --rc genhtml_legend=1 00:07:09.018 --rc geninfo_all_blocks=1 00:07:09.018 --rc geninfo_unexecuted_blocks=1 00:07:09.018 00:07:09.018 ' 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:09.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.018 --rc genhtml_branch_coverage=1 00:07:09.018 --rc genhtml_function_coverage=1 00:07:09.018 --rc genhtml_legend=1 00:07:09.018 --rc geninfo_all_blocks=1 00:07:09.018 --rc geninfo_unexecuted_blocks=1 00:07:09.018 00:07:09.018 ' 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:09.018 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 
-- # nvmftestinit 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:09.018 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:09.019 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.019 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.019 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.019 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:09.019 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:09.019 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:09.019 01:48:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:07:15.582 Found 0000:18:00.0 (0x8086 - 0x159b) 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:07:15.582 Found 0000:18:00.1 (0x8086 - 0x159b) 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.582 
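(Annotation: the PCI scan above identified both E810 functions (0x8086 device 0x159b, ice driver); the trace next loads irdma with roce_ena=1 and maps each function to its net device through sysfs. A minimal sketch of that mapping, with the PCI addresses taken from this run:)

  # Sketch: resolve the kernel net device bound to each PCI function,
  # the same sysfs glob nvmf/common.sh expands below.
  for pci in 0000:18:00.0 0000:18:00.1; do
      for netdir in /sys/bus/pci/devices/$pci/net/*; do
          [[ -e $netdir ]] || continue   # function has no net device bound
          echo "Found net devices under $pci: ${netdir##*/}"
      done
  done
  # This run resolves to cvl_0_0 and cvl_0_1, as echoed below.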
01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@403 -- # modinfo irdma 00:07:15.582 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:07:15.583 Found net devices under 0000:18:00.0: cvl_0_0 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:07:15.583 Found net devices under 0000:18:00.1: cvl_0_1 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # rdma_device_init 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:07:15.583 01:48:34 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@528 -- # allocate_nic_ips 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo cvl_0_0 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo cvl_0_1 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:07:15.583 
01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:07:15.583 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:15.583 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:07:15.583 altname enp24s0f0np0 00:07:15.583 altname ens785f0np0 00:07:15.583 inet 192.168.100.8/24 scope global cvl_0_0 00:07:15.583 valid_lft forever preferred_lft forever 00:07:15.583 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:07:15.583 valid_lft forever preferred_lft forever 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:15.583 01:48:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:07:15.583 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:15.583 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:07:15.583 altname enp24s0f1np1 00:07:15.583 altname ens785f1np1 00:07:15.583 inet 192.168.100.9/24 scope global cvl_0_1 00:07:15.583 valid_lft forever preferred_lft forever 00:07:15.583 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:07:15.583 valid_lft forever preferred_lft forever 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo cvl_0_0 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo cvl_0_1 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:07:15.583 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:07:15.584 192.168.100.9' 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:07:15.584 192.168.100.9' 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # head -n 1 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:07:15.584 192.168.100.9' 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@484 -- # tail -n +2 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # head -n 1 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=3110140 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 3110140 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3110140 ']' 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.584 01:48:35 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.584 [2024-10-09 01:48:35.186618] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:07:15.584 [2024-10-09 01:48:35.186750] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.584 [2024-10-09 01:48:35.317876] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.842 [2024-10-09 01:48:35.510808] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.842 [2024-10-09 01:48:35.510869] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
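(Annotation: the target addresses above come from get_ip_address, which parses ip(8) output. A standalone equivalent for the two interfaces in this run, using only the ip/awk/cut pipeline traced above:)

  get_ip_address() {
      # Field 4 of `ip -o -4 addr show` is ADDR/PREFIX; strip the prefix.
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address cvl_0_0   # 192.168.100.8 in this log
  get_ip_address cvl_0_1   # 192.168.100.9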
00:07:15.842 [2024-10-09 01:48:35.510882] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.842 [2024-10-09 01:48:35.510895] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.842 [2024-10-09 01:48:35.510905] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:15.842 [2024-10-09 01:48:35.512690] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.843 [2024-10-09 01:48:35.512750] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.843 [2024-10-09 01:48:35.512757] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.409 [2024-10-09 01:48:36.079864] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x612000028fc0/0x617000007c40) succeed. 00:07:16.409 [2024-10-09 01:48:36.089471] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029140/0x617000007fc0) succeed. 00:07:16.409 [2024-10-09 01:48:36.089508] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.409 Malloc0 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.409 Delay0 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.409 [2024-10-09 01:48:36.204842] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.409 01:48:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:16.667 [2024-10-09 01:48:36.337444] 
nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:19.196 Initializing NVMe Controllers 00:07:19.196 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:07:19.196 controller IO queue size 128 less than required 00:07:19.196 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:19.196 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:19.196 Initialization complete. Launching workers. 00:07:19.196 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36573 00:07:19.196 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36634, failed to submit 62 00:07:19.196 success 36575, unsuccessful 59, failed 0 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:19.196 rmmod nvme_rdma 00:07:19.196 rmmod nvme_fabrics 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 3110140 ']' 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 3110140 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3110140 ']' 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3110140 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3110140 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3110140' 00:07:19.196 killing process with pid 3110140 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3110140 00:07:19.196 01:48:38 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3110140 00:07:20.572 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:20.572 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:07:20.572 00:07:20.572 real 0m11.579s 00:07:20.573 user 0m17.143s 00:07:20.573 sys 0m5.577s 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:20.573 ************************************ 00:07:20.573 END TEST nvmf_abort 00:07:20.573 ************************************ 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:20.573 ************************************ 00:07:20.573 START TEST nvmf_ns_hotplug_stress 00:07:20.573 ************************************ 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:07:20.573 * Looking for test storage... 
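Before the next test begins, the abort test's RPC sequence is worth seeing in one place. The sketch below is a condensed replay of the abort.sh@17-30 steps traced above, not the script itself; rpc_cmd is the harness wrapper around scripts/rpc.py, and every flag and name is copied from the log. Stacking the Delay0 delay bdev (large artificial latencies on every op) on top of Malloc0 is what keeps I/Os in flight long enough for the abort example to have something to abort:

rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
# Fire aborts at the delayed namespace from lcore 0 for 1 second:
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/abort \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

Per the summary lines above, this run submitted 36634 aborts, of which 36575 succeeded and 59 were unsuccessful, with 62 that could not be submitted.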
00:07:20.573 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:20.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.573 --rc genhtml_branch_coverage=1 00:07:20.573 --rc genhtml_function_coverage=1 00:07:20.573 --rc genhtml_legend=1 00:07:20.573 --rc geninfo_all_blocks=1 00:07:20.573 --rc geninfo_unexecuted_blocks=1 00:07:20.573 00:07:20.573 ' 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:20.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.573 --rc genhtml_branch_coverage=1 00:07:20.573 --rc genhtml_function_coverage=1 00:07:20.573 --rc genhtml_legend=1 00:07:20.573 --rc geninfo_all_blocks=1 00:07:20.573 --rc geninfo_unexecuted_blocks=1 00:07:20.573 00:07:20.573 ' 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:20.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.573 --rc genhtml_branch_coverage=1 00:07:20.573 --rc genhtml_function_coverage=1 00:07:20.573 --rc genhtml_legend=1 00:07:20.573 --rc geninfo_all_blocks=1 00:07:20.573 --rc geninfo_unexecuted_blocks=1 00:07:20.573 00:07:20.573 ' 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:20.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.573 --rc genhtml_branch_coverage=1 00:07:20.573 --rc genhtml_function_coverage=1 00:07:20.573 --rc genhtml_legend=1 00:07:20.573 --rc geninfo_all_blocks=1 00:07:20.573 --rc geninfo_unexecuted_blocks=1 00:07:20.573 00:07:20.573 ' 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # 
source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.573 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.833 01:48:40 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:20.833 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:20.833 01:48:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:27.400 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:27.400 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:27.400 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:27.400 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:27.400 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:27.400 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:27.400 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:27.400 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:27.400 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:27.400 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:27.400 01:48:46 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:27.400 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:27.400 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:27.400 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:27.400 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:27.400 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:27.400 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:27.400 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:27.400 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:27.400 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:07:27.401 Found 0000:18:00.0 (0x8086 - 0x159b) 00:07:27.401 01:48:46 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:07:27.401 Found 0000:18:00.1 (0x8086 - 0x159b) 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@403 -- # modinfo irdma 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:07:27.401 Found net devices under 0000:18:00.0: cvl_0_0 00:07:27.401 01:48:46 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:07:27.401 Found net devices under 0000:18:00.1: cvl_0_1 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # rdma_device_init 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@528 -- # allocate_nic_ips 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t 
rxe_net_devs 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo cvl_0_0 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo cvl_0_1 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:27.401 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:07:27.401 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:27.402 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:07:27.402 altname enp24s0f0np0 00:07:27.402 altname ens785f0np0 00:07:27.402 inet 192.168.100.8/24 scope global cvl_0_0 00:07:27.402 valid_lft forever preferred_lft forever 00:07:27.402 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:07:27.402 valid_lft forever preferred_lft forever 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 
-- # for nic_name in $(get_rdma_if_list) 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:07:27.402 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:27.402 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:07:27.402 altname enp24s0f1np1 00:07:27.402 altname ens785f1np1 00:07:27.402 inet 192.168.100.9/24 scope global cvl_0_1 00:07:27.402 valid_lft forever preferred_lft forever 00:07:27.402 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:07:27.402 valid_lft forever preferred_lft forever 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@108 -- # echo cvl_0_0 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo cvl_0_1 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:07:27.402 192.168.100.9' 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:07:27.402 192.168.100.9' 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # head -n 1 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:07:27.402 192.168.100.9' 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # tail -n +2 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # head -n 1 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 
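The setup traced through this stretch also brings up the kernel RDMA stack for the two E810 ports (0000:18:00.0/1). Condensed into a sketch, with the module names and the roce_ena parameter taken verbatim from the trace and the ordering simplified:

modprobe irdma roce_ena=1   # Intel E810 RDMA driver with RoCE mode enabled
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"         # rdma_device_init in nvmf/common.sh loads each of these
done
modprobe nvme-rdma          # initiator-side transport, loaded just below in the trace

With the modules in place, the harness re-runs the same interface walk as in the abort test, confirms 192.168.100.8 and 192.168.100.9 on cvl_0_0/cvl_0_1, and settles on NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'.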
00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=3113743 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 3113743 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3113743 ']' 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.402 01:48:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:27.402 [2024-10-09 01:48:46.916518] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:07:27.402 [2024-10-09 01:48:46.916637] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.402 [2024-10-09 01:48:47.046697] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.661 [2024-10-09 01:48:47.240774] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.661 [2024-10-09 01:48:47.240820] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:27.661 [2024-10-09 01:48:47.240832] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.661 [2024-10-09 01:48:47.240845] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.661 [2024-10-09 01:48:47.240857] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
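nvmfappstart then launches the target. A sketch of what nvmf/common.sh@506-508 did here, under the assumption (not directly visible in the trace) that the binary is backgrounded and its PID captured with $!:

/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!                 # 3113743 in this run
waitforlisten "$nvmfpid"   # harness helper; blocks until /var/tmp/spdk.sock answers

The -m 0xE core mask (binary 1110) pins the app to cores 1-3, matching the three "Reactor started" notices below, and -e 0xFFFF enables every tracepoint group, which is why the spdk_trace hints are printed at startup.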
00:07:27.661 [2024-10-09 01:48:47.242454] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.661 [2024-10-09 01:48:47.242510] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.661 [2024-10-09 01:48:47.242519] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.920 01:48:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.920 01:48:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:27.920 01:48:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:27.920 01:48:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:27.920 01:48:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:28.178 01:48:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.178 01:48:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:28.179 01:48:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:28.179 [2024-10-09 01:48:47.972793] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x612000028fc0/0x617000007c40) succeed. 00:07:28.179 [2024-10-09 01:48:47.983300] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029140/0x617000007fc0) succeed. 00:07:28.179 [2024-10-09 01:48:47.983332] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:07:28.437 01:48:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:28.437 01:48:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:28.696 [2024-10-09 01:48:48.457242] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:28.696 01:48:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:28.956 01:48:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:29.234 Malloc0 00:07:29.234 01:48:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:29.526 Delay0 00:07:29.526 01:48:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.526 01:48:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:29.801 NULL1 00:07:29.801 01:48:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:30.082 01:48:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3114167 00:07:30.082 01:48:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:30.082 01:48:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:30.082 01:48:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.340 01:48:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.340 01:48:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:30.340 01:48:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:30.597 true 00:07:30.597 01:48:50 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:30.597 01:48:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.855 01:48:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.113 01:48:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:31.113 01:48:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:31.113 true 00:07:31.371 01:48:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:31.371 01:48:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.371 01:48:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.630 01:48:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:31.630 01:48:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:31.889 true 00:07:31.889 01:48:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:31.889 01:48:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.148 01:48:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.148 01:48:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:32.148 01:48:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:32.406 true 00:07:32.406 01:48:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:32.406 01:48:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.665 01:48:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.924 01:48:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:32.924 01:48:52 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:32.924 true 00:07:33.183 01:48:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:33.183 01:48:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.183 01:48:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.441 01:48:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:33.441 01:48:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:33.700 true 00:07:33.700 01:48:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:33.700 01:48:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.959 01:48:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.218 01:48:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:34.218 01:48:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:34.218 true 00:07:34.218 01:48:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:34.218 01:48:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.476 01:48:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.734 01:48:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:34.734 01:48:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:34.993 true 00:07:34.993 01:48:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:34.993 01:48:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.251 01:48:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.251 01:48:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:35.251 01:48:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:35.510 true 00:07:35.510 01:48:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:35.510 01:48:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.768 01:48:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.026 01:48:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:36.026 01:48:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:36.026 true 00:07:36.026 01:48:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:36.026 01:48:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.603 01:48:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.603 01:48:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:36.603 01:48:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:36.861 true 00:07:36.861 01:48:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:36.861 01:48:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.119 01:48:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.377 01:48:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:37.377 01:48:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:37.377 true 00:07:37.377 01:48:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:37.377 01:48:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:37.635 01:48:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.894 01:48:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:37.894 01:48:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:38.152 true 00:07:38.152 01:48:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:38.152 01:48:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.410 01:48:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.410 01:48:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:38.410 01:48:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:38.668 true 00:07:38.668 01:48:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:38.668 01:48:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.926 01:48:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.185 01:48:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:39.185 01:48:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:39.185 true 00:07:39.185 01:48:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:39.185 01:48:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.443 01:48:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.702 01:48:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:39.702 01:48:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:39.960 true 00:07:39.960 01:48:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 
00:07:39.960 01:48:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.218 01:48:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.218 01:48:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:40.218 01:48:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:40.476 true 00:07:40.476 01:49:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:40.476 01:49:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.734 01:49:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.993 01:49:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:40.993 01:49:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:40.993 true 00:07:40.993 01:49:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:41.251 01:49:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.251 01:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.509 01:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:41.509 01:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:41.768 true 00:07:41.768 01:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:41.768 01:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.027 01:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.285 01:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:42.285 01:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:42.285 true 00:07:42.285 01:49:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:42.285 01:49:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.544 01:49:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.802 01:49:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:42.802 01:49:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:43.061 true 00:07:43.061 01:49:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:43.061 01:49:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.319 01:49:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.319 01:49:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:43.319 01:49:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:43.577 true 00:07:43.577 01:49:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:43.577 01:49:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.835 01:49:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.093 01:49:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:44.093 01:49:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:44.093 true 00:07:44.351 01:49:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:44.351 01:49:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.351 01:49:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.609 01:49:04 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:44.609 01:49:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:44.867 true 00:07:44.867 01:49:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:44.867 01:49:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.125 01:49:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.125 01:49:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:45.125 01:49:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:45.384 true 00:07:45.384 01:49:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:45.384 01:49:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.642 01:49:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.900 01:49:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:45.900 01:49:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:46.158 true 00:07:46.158 01:49:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:46.158 01:49:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.158 01:49:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.417 01:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:46.417 01:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:46.675 true 00:07:46.675 01:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:46.675 01:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.933 01:49:06 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.191 01:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:47.191 01:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:47.191 true 00:07:47.191 01:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:47.191 01:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.450 01:49:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.708 01:49:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:47.708 01:49:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:47.966 true 00:07:47.966 01:49:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:47.966 01:49:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.224 01:49:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.224 01:49:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:48.224 01:49:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:48.482 true 00:07:48.482 01:49:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:48.482 01:49:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.740 01:49:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.999 01:49:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:48.999 01:49:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:48.999 true 00:07:49.257 01:49:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:49.257 01:49:08 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.257 01:49:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.515 01:49:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:49.515 01:49:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:49.773 true 00:07:49.773 01:49:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:49.773 01:49:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.031 01:49:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.289 01:49:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:50.289 01:49:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:50.289 true 00:07:50.289 01:49:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:50.289 01:49:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.547 01:49:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.805 01:49:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:50.805 01:49:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:51.063 true 00:07:51.063 01:49:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:51.063 01:49:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.322 01:49:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.322 01:49:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:51.322 01:49:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1035 00:07:51.580 true 00:07:51.580 01:49:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:51.580 01:49:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.837 01:49:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.095 01:49:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:52.095 01:49:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:52.095 true 00:07:52.354 01:49:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:52.354 01:49:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.354 01:49:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.613 01:49:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:52.613 01:49:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:52.871 true 00:07:52.871 01:49:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:52.871 01:49:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.129 01:49:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.129 01:49:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:53.129 01:49:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:53.388 true 00:07:53.388 01:49:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:53.388 01:49:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.646 01:49:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.904 01:49:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- 
# null_size=1039 00:07:53.904 01:49:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:54.162 true 00:07:54.162 01:49:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:54.162 01:49:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.162 01:49:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.421 01:49:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:54.421 01:49:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:54.679 true 00:07:54.679 01:49:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:54.679 01:49:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.937 01:49:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.195 01:49:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:55.195 01:49:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:55.195 true 00:07:55.195 01:49:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:55.195 01:49:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.454 01:49:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.712 01:49:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:55.712 01:49:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:55.978 true 00:07:55.978 01:49:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:55.978 01:49:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.236 01:49:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.236 01:49:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:56.236 01:49:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:56.494 true 00:07:56.494 01:49:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:56.494 01:49:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.752 01:49:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.010 01:49:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:57.010 01:49:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:57.010 true 00:07:57.269 01:49:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:57.269 01:49:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.269 01:49:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.527 01:49:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:57.527 01:49:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:57.786 true 00:07:57.786 01:49:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:57.786 01:49:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.044 01:49:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.302 01:49:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:58.302 01:49:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:58.302 true 00:07:58.302 01:49:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:58.302 01:49:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.560 01:49:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.818 01:49:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:58.818 01:49:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:59.077 true 00:07:59.077 01:49:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:59.077 01:49:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.077 01:49:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.335 01:49:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:59.335 01:49:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:59.594 true 00:07:59.594 01:49:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:07:59.594 01:49:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.853 01:49:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.110 01:49:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:08:00.110 01:49:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:08:00.110 true 00:08:00.110 01:49:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167 00:08:00.110 01:49:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.369 01:49:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.628 01:49:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:08:00.628 01:49:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:08:00.887 true 00:08:00.887 01:49:20 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167
00:08:00.887 01:49:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:01.145 01:49:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:01.404 01:49:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051
00:08:01.404 01:49:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051
00:08:01.404 Initializing NVMe Controllers
00:08:01.404 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:08:01.404 Controller IO queue size 128, less than required.
00:08:01.404 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:01.404 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:01.404 Initialization complete. Launching workers.
00:08:01.404 ========================================================
00:08:01.404                                                                                           Latency(us)
00:08:01.404 Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:08:01.404 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   35380.30      17.28    3617.75    2051.23    5863.58
00:08:01.404 ========================================================
00:08:01.404 Total                                                                          :   35380.30      17.28    3617.75    2051.23    5863.58
00:08:01.404
00:08:01.404 true
00:08:01.404 01:49:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3114167
00:08:01.404 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3114167) - No such process
00:08:01.404 01:49:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3114167
00:08:01.404 01:49:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:01.663 01:49:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:01.922 01:49:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:01.922 01:49:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:01.922 01:49:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:01.922 01:49:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:01.922 01:49:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:02.181 null0
00:08:02.181 01:49:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:02.181
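That summary closes the first phase: spdk_nvme_perf finished its 30 seconds at about 35.4K IOPS (17.28 MiB/s of 512-byte reads, ~3.6 ms average latency, consistent with queue depth 128) while the script ran roughly 51 remove/add/resize passes (null_size 1000 through 1051). The loop itself, at script lines 44-50 referenced throughout the trace, reduces to something like this sketch (reconstructed from the xtrace output, not the verbatim source):

    null_size=1000
    while kill -0 "$PERF_PID"; do          # loop for as long as perf is alive
        # rip out NSID 1 while perf I/O is outstanding, then put it back
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 $null_size   # grow NSID 2 by 1 MiB per pass
    done
    wait "$PERF_PID"                       # script line 53 in the trace

The "kill: (3114167) - No such process" line above is the loop's exit condition firing, not a failure; kill -0 only probes whether the perf process still exists. After the wait, lines 54-55 remove both namespaces to clear the subsystem for the concurrent phase that begins below.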
01:49:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.181 01:49:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:02.181 null1 00:08:02.181 01:49:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:02.181 01:49:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.181 01:49:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:02.440 null2 00:08:02.440 01:49:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:02.440 01:49:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.440 01:49:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:02.698 null3 00:08:02.698 01:49:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:02.698 01:49:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.698 01:49:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:02.956 null4 00:08:02.956 01:49:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:02.956 01:49:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.956 01:49:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:03.214 null5 00:08:03.214 01:49:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:03.214 01:49:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:03.214 01:49:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:03.214 null6 00:08:03.214 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:03.214 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:03.214 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:03.473 null7 00:08:03.473 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:03.473 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:03.473 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:03.473 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.473 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:03.473 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.473 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:03.473 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.473 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:03.473 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.473 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.473 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.473 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:03.473 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
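The `wait 3118706 3118707 ... 3118723` record a few lines below is line @66 of ns_hotplug_stress.sh firing once all eight workers are in flight; xtrace prints the expanded PID array, which is why the literal PIDs appear in the log. In script form this is just:

wait "${pids[@]}"  # block until every add_remove worker has finished its ten cycles

Every add/remove record that follows in this section therefore comes from workers still draining their loops, not from new work being scheduled.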
00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3118706 3118707 3118712 3118715 3118717 3118719 3118721 3118723 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.474 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.733 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.733 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:03.733 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:03.733 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.733 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.733 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:03.733 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:03.733 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.992 01:49:23 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.992 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:08:04.250 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.250 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.250 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.250 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.250 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.250 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.250 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.250 01:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.509 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
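One full cycle from this log can be replayed by hand against a running nvmf target; the two RPCs are the stock SPDK ones traced at @17/@18. A hedged example, assuming the subsystem nqn.2016-06.io.spdk:cnode1 and the null0 bdev were already created earlier in the test (that setup phase is not shown in this section), with nvmf_get_subsystems used only as one convenient way to confirm the result:

# add bdev null0 to the subsystem as namespace ID 1
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
# namespace 1 should now appear in the subsystem listing
scripts/rpc.py nvmf_get_subsystems
# hot-remove it again
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1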
00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.767 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.026 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:05.026 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.026 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.026 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.026 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:05.026 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.026 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.026 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:05.284 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.284 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.284 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.284 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.284 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.284 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:08:05.284 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.284 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.284 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:05.284 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.284 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.284 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:05.284 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.284 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.284 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:05.284 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.284 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.284 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.284 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:05.284 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.284 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.285 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.285 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.285 01:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.285 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:05.547 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.547 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.547 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:05.547 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.547 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:05.547 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.547 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.547 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.547 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.547 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.547 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.547 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.547 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:05.547 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.547 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.547 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.829 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.117 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:06.392 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.392 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.392 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.392 01:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.392 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.392 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.392 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.392 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.392 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.392 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.392 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.392 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.392 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.392 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:06.392 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.392 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.392 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:06.392 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.392 01:49:26 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.392 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.392 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.392 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.392 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.669 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.669 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.669 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.669 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.669 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.669 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.669 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.669 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.669 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:06.669 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.669 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.669 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.669 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.669 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.669 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.669 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.669 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.935 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:07.193 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:07.193 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.193 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.193 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.193 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:07.193 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.193 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.193 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.193 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.193 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.193 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.193 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.193 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.194 01:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.452 01:49:27 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.452 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:07.710 rmmod nvme_rdma 00:08:07.710 rmmod nvme_fabrics 00:08:07.710 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:07.969 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:07.969 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:07.969 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 3113743 ']' 00:08:07.969 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 3113743 00:08:07.969 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3113743 ']' 00:08:07.969 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3113743 00:08:07.969 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:07.969 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:07.969 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3113743 00:08:07.969 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:07.969 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:07.969 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3113743' 00:08:07.969 killing process with pid 3113743 00:08:07.969 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3113743 00:08:07.969 01:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3113743 00:08:09.345 01:49:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:09.345 01:49:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:08:09.345 00:08:09.345 real 0m48.799s 00:08:09.345 user 3m32.643s 00:08:09.345 sys 0m16.451s 00:08:09.345 01:49:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.345 01:49:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:09.345 ************************************ 00:08:09.345 END TEST nvmf_ns_hotplug_stress 00:08:09.345 
************************************ 00:08:09.345 01:49:29 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:08:09.345 01:49:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:09.345 01:49:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.345 01:49:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:09.345 ************************************ 00:08:09.345 START TEST nvmf_delete_subsystem 00:08:09.345 ************************************ 00:08:09.345 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:08:09.345 * Looking for test storage... 00:08:09.604 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:08:09.604 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:09.604 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:08:09.604 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:09.604 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:09.604 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.604 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.604 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.604 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.604 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.604 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.604 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.604 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.604 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.604 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.604 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:09.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.605 --rc genhtml_branch_coverage=1 00:08:09.605 --rc genhtml_function_coverage=1 00:08:09.605 --rc genhtml_legend=1 00:08:09.605 --rc geninfo_all_blocks=1 00:08:09.605 --rc geninfo_unexecuted_blocks=1 00:08:09.605 00:08:09.605 ' 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:09.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.605 --rc genhtml_branch_coverage=1 00:08:09.605 --rc genhtml_function_coverage=1 00:08:09.605 --rc genhtml_legend=1 00:08:09.605 --rc geninfo_all_blocks=1 00:08:09.605 --rc geninfo_unexecuted_blocks=1 00:08:09.605 00:08:09.605 ' 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:09.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.605 --rc genhtml_branch_coverage=1 00:08:09.605 --rc genhtml_function_coverage=1 00:08:09.605 --rc genhtml_legend=1 00:08:09.605 --rc geninfo_all_blocks=1 00:08:09.605 --rc geninfo_unexecuted_blocks=1 00:08:09.605 00:08:09.605 ' 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:09.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.605 --rc genhtml_branch_coverage=1 00:08:09.605 --rc genhtml_function_coverage=1 00:08:09.605 --rc genhtml_legend=1 00:08:09.605 --rc geninfo_all_blocks=1 00:08:09.605 --rc geninfo_unexecuted_blocks=1 00:08:09.605 00:08:09.605 ' 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
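
Just before the PATH exports above, scripts/common.sh probed the installed lcov: the cmp_versions trace splits "1.15" and "2" on IFS=.-: into arrays and walks them field by field to decide whether the newer coverage flags apply. A condensed, self-contained sketch of that less-than comparison (the real helper routes through decimal/cmp_versions; missing fields simply default to 0 here):

#!/usr/bin/env bash
# Condensed sketch of the lt/cmp_versions logic traced from scripts/common.sh:
# succeeds when version $1 sorts strictly before version $2.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$2"   # "2"    -> (2)
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not less-than
}

lt 1.15 2 && echo "lcov 1.15 predates 2: enable branch/function coverage opts"
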
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:09.605 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:09.605 01:49:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:16.169 01:49:35 
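
Note the failure captured at the top of this chunk: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' and bash rejects it with "[: : integer expression expected", because -eq requires integers and the variable expanded to an empty string; the test just evaluates false and the script carries on to @37. A minimal repro and the usual guard (the variable name below is hypothetical; any empty expansion triggers it):

#!/usr/bin/env bash
# Repro of the "[: : integer expression expected" message seen in the trace.
unset SOME_FLAG                      # hypothetical name; stands in for the
                                     # empty expansion at common.sh line 33
[ "$SOME_FLAG" -eq 1 ] && echo set   # errors on stderr, evaluates false

# Guard: default the expansion so test always sees an integer.
[ "${SOME_FLAG:-0}" -eq 1 ] && echo set   # quietly false instead
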
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:08:16.169 Found 0000:18:00.0 (0x8086 - 0x159b) 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.169 01:49:35 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:08:16.169 Found 0000:18:00.1 (0x8086 - 0x159b) 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@403 -- # modinfo irdma 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:16.169 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:08:16.170 Found net devices under 0000:18:00.0: cvl_0_0 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:08:16.170 Found net devices under 0000:18:00.1: cvl_0_1 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # rdma_device_init 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@528 -- # allocate_nic_ips 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:16.170 01:49:35 
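
The device walk above matched both E810 functions (vendor 0x8086, device 0x159b, driver ice), loaded irdma with roce_ena=1, and then resolved each PCI address to its kernel interface with a sysfs glob, yielding cvl_0_0 and cvl_0_1. That resolution step in isolation, with the PCI addresses copied from the log:

#!/usr/bin/env bash
# Sketch of the PCI-to-netdev mapping done at nvmf/common.sh@409/@425.
# Requires the PCI functions to exist; on this CI box they resolve to
# cvl_0_0 and cvl_0_1.
for pci in 0000:18:00.0 0000:18:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done
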
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo cvl_0_0 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo cvl_0_1 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:08:16.170 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:08:16.170 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:08:16.170 altname enp24s0f0np0 00:08:16.170 altname ens785f0np0 00:08:16.170 inet 192.168.100.8/24 scope global cvl_0_0 00:08:16.170 valid_lft forever preferred_lft forever 00:08:16.170 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:08:16.170 valid_lft forever preferred_lft forever 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:08:16.170 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:08:16.170 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:08:16.170 altname enp24s0f1np1 00:08:16.170 altname ens785f1np1 00:08:16.170 inet 192.168.100.9/24 scope global cvl_0_1 00:08:16.170 valid_lft forever preferred_lft forever 00:08:16.170 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:08:16.170 valid_lft forever preferred_lft forever 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo cvl_0_0 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # 
for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo cvl_0_1 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:16.170 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:08:16.171 192.168.100.9' 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:08:16.171 192.168.100.9' 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # head -n 1 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:08:16.171 192.168.100.9' 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # tail -n +2 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # head -n 1 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:08:16.171 01:49:35 
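
allocate_nic_ips and get_available_rdma_ips above both lean on the same one-liner traced at nvmf/common.sh@116-@117: pull the IPv4 address off an interface and strip the prefix length, which is how RDMA_IP_LIST ends up as '192.168.100.8 192.168.100.9' and feeds NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP. The extraction on its own:

#!/usr/bin/env bash
# get_ip_address as traced at nvmf/common.sh@116-@117: first IPv4 address
# of an interface, without the /24 suffix.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address cvl_0_0   # prints 192.168.100.8 on the box in this log
get_ip_address cvl_0_1   # prints 192.168.100.9
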
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=3122665 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 3122665 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3122665 ']' 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.171 01:49:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.171 [2024-10-09 01:49:35.845885] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:08:16.171 [2024-10-09 01:49:35.846006] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.171 [2024-10-09 01:49:35.977687] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:16.429 [2024-10-09 01:49:36.178525] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.429 [2024-10-09 01:49:36.178591] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.429 [2024-10-09 01:49:36.178605] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.429 [2024-10-09 01:49:36.178619] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.429 [2024-10-09 01:49:36.178629] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
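
nvmfappstart above launches the target with two reactors (-m 0x3) and records nvmfpid=3122665, then waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A sketch of that start-and-wait pattern; probing readiness with rpc_get_methods is an assumption made here for illustration, not a quote of the framework's own probe:

#!/usr/bin/env bash
# Sketch: start nvmf_tgt as the trace does, then poll its RPC socket.
# The rpc_get_methods readiness probe is an assumption for illustration.
spdk=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
"$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!

for (( i = 0; i < 100; i++ )); do
    if "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        echo "nvmf_tgt (pid $nvmfpid) is listening"
        break
    fi
    sleep 0.1
done
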
00:08:16.429 [2024-10-09 01:49:36.180120] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.429 [2024-10-09 01:49:36.180133] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.995 [2024-10-09 01:49:36.730201] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x612000028cc0/0x617000007c40) succeed. 00:08:16.995 [2024-10-09 01:49:36.739738] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000028e40/0x617000007fc0) succeed. 00:08:16.995 [2024-10-09 01:49:36.739775] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.995 [2024-10-09 01:49:36.756123] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.995 NULL1 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.995 Delay0 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3122850 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:16.995 01:49:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
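
Lines @15-@28 of delete_subsystem.sh above stand the whole target up before anything gets deleted: an RDMA transport with 1024 shared buffers, subsystem cnode1 capped at 10 namespaces, a listener on 192.168.100.8:4420, a 1000 MiB null bdev wrapped in a bdev_delay with large artificial latencies, the delay bdev attached as a namespace, and spdk_nvme_perf aimed at it for 5 seconds at queue depth 128. The same sequence as a plain script, every path and parameter copied from the trace:

#!/usr/bin/env bash
# The delete_subsystem.sh setup traced above, replayed as one RPC sequence.
spdk=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
rpc=$spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

"$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
"$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
"$rpc" bdev_null_create NULL1 1000 512
"$rpc" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$rpc" nvmf_subsystem_add_ns "$nqn" Delay0

# Keep I/O in flight against the slow namespace while the test runs.
"$spdk/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
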
target/delete_subsystem.sh@30 -- # sleep 2 00:08:17.254 [2024-10-09 01:49:36.911258] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:19.155 01:49:38 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:19.155 01:49:38 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.155 01:49:38 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.721 [2024-10-09 01:49:39.429224] nvme_rdma.c:2447:nvme_rdma_log_wc_status: *ERROR*: WC error, qid 5, qp state 1, request 0x35184374495520 type 1, status: (12): transport retry counter exceeded 00:08:19.721 NVMe io qpair process completion error 00:08:19.721 NVMe io qpair process completion error 00:08:19.721 Read completed with error (sct=0, sc=8) 00:08:19.721 starting I/O failed: -6 00:08:19.721 Read completed with error (sct=0, sc=8) 00:08:19.721 Read completed with error (sct=0, sc=8) 00:08:19.721 starting I/O failed: -6 00:08:19.721 Write completed with error (sct=0, sc=8) 00:08:19.721 Write completed with error (sct=0, sc=8) 00:08:19.721 starting I/O failed: -6 00:08:19.721 Write completed with error (sct=0, sc=8) 00:08:19.721 Write completed with error (sct=0, sc=8) 00:08:19.721 starting I/O failed: -6 00:08:19.721 Read completed with error (sct=0, sc=8) 00:08:19.721 Write completed with error (sct=0, sc=8) 00:08:19.721 starting I/O failed: -6 00:08:19.721 Write completed with error (sct=0, sc=8) 00:08:19.721 Read completed with error (sct=0, sc=8) 00:08:19.721 starting I/O failed: -6 00:08:19.721 Read completed with error (sct=0, sc=8) 00:08:19.721 Write completed with error (sct=0, sc=8) 00:08:19.721 starting I/O failed: -6 00:08:19.721 Read completed with error (sct=0, sc=8) 00:08:19.721 Read completed with error (sct=0, sc=8) 00:08:19.721 starting I/O failed: -6 00:08:19.721 Read completed with error (sct=0, sc=8) 00:08:19.721 Read completed with error (sct=0, sc=8) 00:08:19.721 starting I/O failed: -6 00:08:19.721 Write completed with error (sct=0, sc=8) 00:08:19.721 Read completed with error (sct=0, sc=8) 00:08:19.721 starting I/O failed: -6 00:08:19.721 Read completed with error (sct=0, sc=8) 00:08:19.721 Read completed with error (sct=0, sc=8) 00:08:19.721 starting I/O failed: -6 00:08:19.721 Write completed with error (sct=0, sc=8) 00:08:19.721 Write completed with error (sct=0, sc=8) 00:08:19.721 starting I/O failed: -6 00:08:19.721 Read completed with error (sct=0, sc=8) 00:08:19.721 Read completed with error (sct=0, sc=8) 00:08:19.721 starting I/O failed: -6 00:08:19.721 Read completed with error (sct=0, sc=8) 00:08:19.721 Write completed with error (sct=0, sc=8) 00:08:19.721 starting I/O failed: -6 00:08:19.721 Read completed with error (sct=0, sc=8) 00:08:19.721 Write completed with error (sct=0, sc=8) 00:08:19.721 starting I/O failed: -6 00:08:19.721 Write completed with error (sct=0, sc=8) 00:08:19.721 Read completed with error (sct=0, sc=8) 00:08:19.721 starting I/O failed: -6 00:08:19.721 Read completed with error (sct=0, sc=8) 00:08:19.721 Read completed with error (sct=0, sc=8) 00:08:19.721 starting I/O failed: -6 00:08:19.721 Write completed with error (sct=0, sc=8) 00:08:19.721 Read completed with error 
(sct=0, sc=8)
00:08:19.721 starting I/O failed: -6
00:08:19.721 [further "Read/Write completed with error (sct=0, sc=8)" completions interleaved with "starting I/O failed: -6" omitted]
00:08:20.288 [2024-10-09 01:49:39.992597] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:08:20.288 [repeated "Read/Write completed with error (sct=0, sc=8)" completions omitted]
00:08:20.288 [2024-10-09 01:49:39.993644] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:08:20.288 [repeated "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pairs omitted]
00:08:20.288 [2024-10-09 01:49:39.994801] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:08:20.288 [remaining "Read/Write completed with error (sct=0, sc=8)" completions omitted]
00:08:20.289 01:49:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.289 01:49:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:20.289 01:49:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3122850
00:08:20.289 01:49:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
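The xtrace entries above show delete_subsystem.sh entering its wait loop: it polls the backgrounded perf process with kill -0, which tests for process existence without delivering a signal, and naps 0.5 s between probes. A minimal standalone sketch of the same pattern, assuming $perf_pid holds the PID of a backgrounded workload (the variable name is illustrative, not the script's own):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 probes existence only; no signal is sent
        (( delay++ > 30 )) && break             # give up after ~15 s of 0.5 s naps
        sleep 0.5
    done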
00:08:20.855 01:49:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:20.855 01:49:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3122850
00:08:20.855 01:49:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:21.422 NVMe io qpair process completion error
00:08:21.423 NVMe io qpair process completion error
00:08:21.423 NVMe io qpair process completion error
00:08:21.423 01:49:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:21.423 01:49:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3122850
00:08:21.423 01:49:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:21.989 [2024-10-09 01:49:41.528584] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:08:21.989 [repeated "Read/Write completed with error (sct=0, sc=8)" completions omitted]
00:08:21.989 [2024-10-09 01:49:41.529405] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:08:21.990 [repeated "Read/Write completed with error (sct=0, sc=8)" completions omitted]
00:08:21.990 [2024-10-09 01:49:41.530097] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:08:21.990 [repeated "Read/Write completed with error (sct=0, sc=8)" completions omitted]
00:08:21.990 [2024-10-09 01:49:41.530920] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:08:21.990 [repeated "Read/Write completed with error (sct=0, sc=8)" completions omitted]
00:08:21.990 01:49:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:21.990 01:49:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3122850
00:08:21.990 Initializing NVMe Controllers
00:08:21.990 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:08:21.990 Controller IO queue size 128, less than required.
00:08:21.990 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:21.990 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:21.990 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:21.990 Initialization complete. Launching workers.
00:08:21.990 ========================================================
00:08:21.990                                                                            Latency(us)
00:08:21.990 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:08:21.990 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     141.18       0.07 1323140.44  429971.33 2520093.25
00:08:21.990 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     141.18       0.07 1361735.35  981269.06 2515731.01
00:08:21.990 ========================================================
00:08:21.990 Total                                                                    :     282.37       0.14 1342437.90  429971.33 2520093.25
00:08:21.990
00:08:21.990 01:49:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:21.990 [2024-10-09 01:49:41.537115] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:08:21.990 [2024-10-09 01:49:41.562068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:08:21.990 [2024-10-09 01:49:41.562105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
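A note on the status spam condensed above: in NVMe terms, sct=0 is the generic command status type, and status code 8 under it is Command Aborted due to SQ Deletion, which is the expected completion for I/O still queued when the subsystem, and with it the submission queues, is deleted underneath the initiator. When eyeballing a raw console log, a one-liner like the following condenses such bursts into counts (a reading convenience, not part of the test; build.log is a placeholder file name):

    grep -o '[RW][a-z]* completed with error (sct=0, sc=8)' build.log | sort | uniq -c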
00:08:21.990 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:22.248 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:22.248 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3122850 00:08:22.248 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3122850) - No such process 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3122850 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3122850 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3122850 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:22.249 [2024-10-09 01:49:42.060937] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 
-- # xtrace_disable 00:08:22.249 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:22.507 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.507 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3123523 00:08:22.507 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:22.507 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:22.507 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3123523 00:08:22.507 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:22.507 [2024-10-09 01:49:42.187800] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:22.765 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:22.765 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3123523 00:08:22.765 01:49:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:23.331 01:49:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:23.331 01:49:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3123523 00:08:23.331 01:49:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:23.897 01:49:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:23.897 01:49:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3123523 00:08:23.897 01:49:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:24.463 01:49:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:24.463 01:49:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3123523 00:08:24.463 01:49:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:25.029 01:49:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:25.029 01:49:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3123523 00:08:25.029 01:49:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:25.595 01:49:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:25.595 01:49:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3123523 00:08:25.595 01:49:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:25.854 [delete_subsystem.sh@60 '(( delay++ > 20 ))' / @57 'kill -0 3123523' / @58 'sleep 0.5' poll iterations from 01:49:45 through 01:49:49 (timestamps 00:08:25.854 through 00:08:29.507) omitted]
00:08:29.765 Initializing NVMe Controllers
00:08:29.765 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:08:29.765 Controller IO queue size 128, less than required.
00:08:29.765 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:29.765 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:29.765 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:29.765 Initialization complete. Launching workers.
00:08:29.765 ========================================================
00:08:29.765                                                                            Latency(us)
00:08:29.765 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:08:29.765 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     128.00       0.06 1001473.89 1000067.03 1004765.56
00:08:29.765 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     128.00       0.06 1002748.08 1000078.05 1006558.38
00:08:29.765 ========================================================
00:08:29.765 Total                                                                    :     256.00       0.12 1002110.99 1000067.03 1006558.38
00:08:29.765
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3123523
00:08:30.024 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3123523) - No such process
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3123523
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:08:30.024 rmmod nvme_rdma
00:08:30.024 rmmod nvme_fabrics
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 3122665 ']'
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 3122665
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3122665 ']'
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3122665
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
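For reference, the spdk_nvme_perf invocation traced at delete_subsystem.sh@52 earlier decodes as follows. Annotations follow spdk_nvme_perf's documented options; the -P reading is our best understanding and worth confirming against the tool's --help output. Rebuilt with one flag per line so each can be annotated:

    args=(
        -c 0xC                                                          # core mask: workers on cores 2 and 3, matching the lcore 2/3 lines above
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'  # NVMe-oF target transport ID
        -t 3                                                            # run time, seconds
        -q 128                                                          # outstanding I/Os (queue depth) per qpair
        -w randrw                                                       # random mixed read/write workload
        -M 70                                                           # rwmixread: 70% reads, 30% writes
        -o 512                                                          # I/O size, bytes
        -P 4                                                            # I/O qpairs per namespace (our reading of -P)
    )
    /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf "${args[@]}"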
00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3122665 00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3122665' 00:08:30.024 killing process with pid 3122665 00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3122665 00:08:30.024 01:49:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3122665 00:08:31.398 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:31.398 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:08:31.398 00:08:31.398 real 0m22.013s 00:08:31.398 user 0m53.674s 00:08:31.398 sys 0m6.561s 00:08:31.398 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.398 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.398 ************************************ 00:08:31.398 END TEST nvmf_delete_subsystem 00:08:31.398 ************************************ 00:08:31.399 01:49:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:08:31.399 01:49:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:31.399 01:49:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.399 01:49:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:31.399 ************************************ 00:08:31.399 START TEST nvmf_host_management 00:08:31.399 ************************************ 00:08:31.399 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:08:31.657 * Looking for test storage... 
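The START/END banners and the real/user/sys lines above come from the harness's run_test wrapper, which names a test, times it, and brackets it with banners. Roughly, in simplified form (not SPDK's exact helper, which also manages xtrace state and argument checks):

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                 # emits the real/user/sys lines seen above
        echo "************ END TEST $name ************"
    }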
00:08:31.657 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:31.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.657 --rc genhtml_branch_coverage=1 00:08:31.657 --rc genhtml_function_coverage=1 00:08:31.657 --rc genhtml_legend=1 00:08:31.657 --rc geninfo_all_blocks=1 00:08:31.657 --rc geninfo_unexecuted_blocks=1 00:08:31.657 00:08:31.657 ' 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:31.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.657 --rc genhtml_branch_coverage=1 00:08:31.657 --rc genhtml_function_coverage=1 00:08:31.657 --rc genhtml_legend=1 00:08:31.657 --rc geninfo_all_blocks=1 00:08:31.657 --rc geninfo_unexecuted_blocks=1 00:08:31.657 00:08:31.657 ' 00:08:31.657 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:31.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.658 --rc genhtml_branch_coverage=1 00:08:31.658 --rc genhtml_function_coverage=1 00:08:31.658 --rc genhtml_legend=1 00:08:31.658 --rc geninfo_all_blocks=1 00:08:31.658 --rc geninfo_unexecuted_blocks=1 00:08:31.658 00:08:31.658 ' 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:31.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.658 --rc genhtml_branch_coverage=1 00:08:31.658 --rc genhtml_function_coverage=1 00:08:31.658 --rc genhtml_legend=1 00:08:31.658 --rc geninfo_all_blocks=1 00:08:31.658 --rc geninfo_unexecuted_blocks=1 00:08:31.658 00:08:31.658 ' 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated golangci/protoc/go segments from earlier exports omitted]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[repeated segments omitted]:/var/lib/snapd/snap/bin
00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[repeated segments omitted]:/var/lib/snapd/snap/bin
00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[repeated segments omitted]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:31.658 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:31.658 01:49:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:38.216 01:49:57 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:08:38.216 Found 0000:18:00.0 (0x8086 - 0x159b) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:08:38.216 Found 0000:18:00.1 (0x8086 - 0x159b) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@403 -- # modinfo irdma 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:08:38.216 Found net devices under 0000:18:00.0: cvl_0_0 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.216 01:49:57 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:08:38.216 Found net devices under 0000:18:00.1: cvl_0_1 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # rdma_device_init 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@528 -- # allocate_nic_ips 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:38.216 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo cvl_0_0 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo cvl_0_1 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:08:38.217 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:08:38.217 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:08:38.217 altname enp24s0f0np0 00:08:38.217 altname ens785f0np0 00:08:38.217 inet 192.168.100.8/24 scope global cvl_0_0 00:08:38.217 valid_lft forever preferred_lft forever 00:08:38.217 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:08:38.217 valid_lft forever preferred_lft forever 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@116 -- # interface=cvl_0_1 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:08:38.217 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:08:38.217 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:08:38.217 altname enp24s0f1np1 00:08:38.217 altname ens785f1np1 00:08:38.217 inet 192.168.100.9/24 scope global cvl_0_1 00:08:38.217 valid_lft forever preferred_lft forever 00:08:38.217 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:08:38.217 valid_lft forever preferred_lft forever 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo cvl_0_0 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
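
Stripped of the xtrace noise, the device bring-up traced above reduces to a short loop: glob each matching PCI function's net/ directory, keep only the interface names, and reload irdma with RoCE enabled. A condensed sketch, assuming pci_devs already holds the two E810 functions found in this run (0000:18:00.0 and 0000:18:00.1, vendor 0x8086, device 0x159b):

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
    # E810 RDMA defaults to iWARP; the harness reloads irdma so the test runs over RoCE
    modinfo irdma && modprobe irdma roce_ena=1
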
00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo cvl_0_1 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:08:38.217 192.168.100.9' 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # head -n 1 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:08:38.217 192.168.100.9' 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:08:38.217 192.168.100.9' 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # tail -n +2 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # head -n 1 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # 
modprobe nvme-rdma 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=3127581 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 3127581 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3127581 ']' 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.217 01:49:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.217 [2024-10-09 01:49:57.678558] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:08:38.217 [2024-10-09 01:49:57.678672] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.217 [2024-10-09 01:49:57.810054] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.217 [2024-10-09 01:49:58.001956] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.217 [2024-10-09 01:49:58.002013] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.217 [2024-10-09 01:49:58.002024] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.218 [2024-10-09 01:49:58.002037] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.218 [2024-10-09 01:49:58.002047] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
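
The address plumbing traced above (nvmf/common.sh@116-117 and @483-484) reduces to one helper plus head/tail slicing. A self-contained sketch using the addresses from this run:

    get_ip_address() {
        local interface=$1
        # -o prints one record per line; field 4 is "ADDR/PREFIX", cut drops the prefix
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address cvl_0_0)" "$(get_ip_address cvl_0_1)")
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
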
00:08:38.218 [2024-10-09 01:49:58.004463] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.218 [2024-10-09 01:49:58.004526] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.218 [2024-10-09 01:49:58.004608] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.218 [2024-10-09 01:49:58.004631] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:08:38.784 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:38.784 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:38.784 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:38.784 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:38.784 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.784 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.784 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:38.784 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.784 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.784 [2024-10-09 01:49:58.553150] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x612000029440/0x617000007c40) succeed. 00:08:38.784 [2024-10-09 01:49:58.562900] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x6120000295c0/0x617000007fc0) succeed. 00:08:38.784 [2024-10-09 01:49:58.562934] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:08:38.784 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.784 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:38.784 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:38.784 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.784 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:38.784 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:38.784 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:38.784 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.784 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.043 Malloc0 00:08:39.043 [2024-10-09 01:49:58.682408] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3127796 00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3127796 /var/tmp/bdevperf.sock 00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3127796 ']' 00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:39.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
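
Both waitforlisten calls in this log (one for nvmf_tgt on /var/tmp/spdk.sock, one for bdevperf on /var/tmp/bdevperf.sock) follow the same pattern: poll the app's RPC socket until it answers, bailing out if the process dies first. A minimal re-creation, assuming SPDK's scripts/rpc.py is on hand; the real helper lives in autotest_common.sh and is traced here with max_retries=100:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 100; i > 0; i--)); do
            kill -s 0 "$pid" 2>/dev/null || return 1   # app exited before listening
            # rpc_get_methods succeeds as soon as the RPC server accepts connections
            scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1   # never came up
    }
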
00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:39.043 { 00:08:39.043 "params": { 00:08:39.043 "name": "Nvme$subsystem", 00:08:39.043 "trtype": "$TEST_TRANSPORT", 00:08:39.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:39.043 "adrfam": "ipv4", 00:08:39.043 "trsvcid": "$NVMF_PORT", 00:08:39.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.043 "hdgst": ${hdgst:-false}, 00:08:39.043 "ddgst": ${ddgst:-false} 00:08:39.043 }, 00:08:39.043 "method": "bdev_nvme_attach_controller" 00:08:39.043 } 00:08:39.043 EOF 00:08:39.043 )") 00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:08:39.043 01:49:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:39.043 "params": { 00:08:39.043 "name": "Nvme0", 00:08:39.043 "trtype": "rdma", 00:08:39.043 "traddr": "192.168.100.8", 00:08:39.043 "adrfam": "ipv4", 00:08:39.043 "trsvcid": "4420", 00:08:39.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:39.043 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:39.043 "hdgst": false, 00:08:39.043 "ddgst": false 00:08:39.043 }, 00:08:39.043 "method": "bdev_nvme_attach_controller" 00:08:39.043 }' 00:08:39.043 [2024-10-09 01:49:58.830855] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:08:39.043 [2024-10-09 01:49:58.830959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3127796 ] 00:08:39.336 [2024-10-09 01:49:58.958140] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.605 [2024-10-09 01:49:59.165202] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.863 Running I/O for 10 seconds... 
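
The --json /dev/fd/63 in the bdevperf command line above is bash process substitution: gen_nvmf_target_json emits the bdev_nvme_attach_controller config printed in the trace, and bdevperf reads it as if it were a file. Spelled out, this is the same invocation host_management.sh line 68 reports verbatim further down in this log:

    $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json "0") \
        -q 64 -o 65536 -w verify -t 10

That is 64 outstanding I/Os of 64 KiB each, a verify (read-back-and-check) workload, run for 10 seconds against the Nvme0n1 bdev attached over RDMA to 192.168.100.8:4420.
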
00:08:39.863 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.863 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:39.863 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:39.863 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.863 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=318 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 318 -ge 100 ']' 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
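
waitforio, traced just above, gates the failure injection on the workload actually making progress: it polls bdev_get_iostat over the bdevperf RPC socket until the bdev has completed at least 100 reads (here it sees 318 on the first poll). A sketch close to the traced flow; the pause between polls is an assumption, since this run never loops:

    waitforio() {
        local rpc_sock=$1 bdev=$2
        [ -n "$rpc_sock" ] || return 1   # need the bdevperf RPC socket
        [ -n "$bdev" ] || return 1       # and a bdev name
        local ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25   # assumed back-off between polls
        done
        return $ret
    }
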
00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.122 01:49:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:40.690 [2024-10-09 01:50:00.279582] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:08:40.690 [2024-10-09 01:50:00.279656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:57344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019edfc40 len:0x10000 key:0x79d7c44d 00:08:40.690 [2024-10-09 01:50:00.279678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.690 [2024-10-09 01:50:00.279717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:57472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ecfb80 len:0x10000 key:0x79d7c44d 00:08:40.690 [2024-10-09 01:50:00.279731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.690 [2024-10-09 01:50:00.279747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ebfac0 len:0x10000 key:0x79d7c44d 00:08:40.690 [2024-10-09 01:50:00.279760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.690 [2024-10-09 01:50:00.279775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eafa00 len:0x10000 key:0x79d7c44d 00:08:40.690 [2024-10-09 01:50:00.279788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.690 [2024-10-09 01:50:00.279804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:57856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e9f940 len:0x10000 key:0x79d7c44d 00:08:40.690 [2024-10-09 01:50:00.279816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.690 [2024-10-09 01:50:00.279838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:57984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e8f880 len:0x10000 key:0x79d7c44d 00:08:40.690 [2024-10-09 01:50:00.279850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.690 [2024-10-09 01:50:00.279866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e7f7c0 len:0x10000 key:0x79d7c44d 00:08:40.690 
[2024-10-09 01:50:00.279879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.690 [2024-10-09 01:50:00.279895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e6f700 len:0x10000 key:0x79d7c44d 00:08:40.690 [2024-10-09 01:50:00.279908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.690 [2024-10-09 01:50:00.279924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:58368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e5f640 len:0x10000 key:0x79d7c44d 00:08:40.690 [2024-10-09 01:50:00.279936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.690 [2024-10-09 01:50:00.279951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e4f580 len:0x10000 key:0x79d7c44d 00:08:40.690 [2024-10-09 01:50:00.279963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.690 [2024-10-09 01:50:00.279978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:58624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e3f4c0 len:0x10000 key:0x79d7c44d 00:08:40.690 [2024-10-09 01:50:00.279992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.690 [2024-10-09 01:50:00.280008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e2f400 len:0x10000 key:0x79d7c44d 00:08:40.690 [2024-10-09 01:50:00.280020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.690 [2024-10-09 01:50:00.280039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e1f340 len:0x10000 key:0x79d7c44d 00:08:40.690 [2024-10-09 01:50:00.280052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.690 [2024-10-09 01:50:00.280068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:59008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e0f280 len:0x10000 key:0x79d7c44d 00:08:40.690 [2024-10-09 01:50:00.280080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.690 [2024-10-09 01:50:00.280096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a1d2e00 len:0x10000 key:0x79d7c44d 00:08:40.690 [2024-10-09 01:50:00.280108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.690 [2024-10-09 01:50:00.280134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a1c2d40 len:0x10000 key:0x79d7c44d 00:08:40.690 [2024-10-09 01:50:00.280147] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.690 [2024-10-09 01:50:00.280161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a1b2c80 len:0x10000 key:0x79d7c44d 00:08:40.690 [2024-10-09 01:50:00.280174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.690 [2024-10-09 01:50:00.280189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a1a2bc0 len:0x10000 key:0x79d7c44d 00:08:40.690 [2024-10-09 01:50:00.280201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.690 [2024-10-09 01:50:00.280216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:59648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a192b00 len:0x10000 key:0x79d7c44d 00:08:40.690 [2024-10-09 01:50:00.280229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.690 [2024-10-09 01:50:00.280243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a182a40 len:0x10000 key:0x79d7c44d 00:08:40.690 [2024-10-09 01:50:00.280255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.690 [2024-10-09 01:50:00.280269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a172980 len:0x10000 key:0x79d7c44d 00:08:40.690 [2024-10-09 01:50:00.280281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.690 [2024-10-09 01:50:00.280295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a1628c0 len:0x10000 key:0x79d7c44d 00:08:40.690 [2024-10-09 01:50:00.280307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a152800 len:0x10000 key:0x79d7c44d 00:08:40.691 [2024-10-09 01:50:00.280334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a142740 len:0x10000 key:0x79d7c44d 00:08:40.691 [2024-10-09 01:50:00.280365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a132680 len:0x10000 key:0x79d7c44d 00:08:40.691 [2024-10-09 01:50:00.280395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:08:40.691 [2024-10-09 01:50:00.280410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a1225c0 len:0x10000 key:0x79d7c44d 00:08:40.691 [2024-10-09 01:50:00.280424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a112500 len:0x10000 key:0x79d7c44d 00:08:40.691 [2024-10-09 01:50:00.280450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:56704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4ff000 len:0x10000 key:0xef81561b 00:08:40.691 [2024-10-09 01:50:00.280478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:56832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d520000 len:0x10000 key:0xef81561b 00:08:40.691 [2024-10-09 01:50:00.280506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:56960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d541000 len:0x10000 key:0xef81561b 00:08:40.691 [2024-10-09 01:50:00.280533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d562000 len:0x10000 key:0xef81561b 00:08:40.691 [2024-10-09 01:50:00.280566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:57216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d583000 len:0x10000 key:0xef81561b 00:08:40.691 [2024-10-09 01:50:00.280593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a102440 len:0x10000 key:0x79d7c44d 00:08:40.691 [2024-10-09 01:50:00.280621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019deffc0 len:0x10000 key:0x406c26b2 00:08:40.691 [2024-10-09 01:50:00.280650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280668] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:61056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ddff00 len:0x10000 key:0x406c26b2 00:08:40.691 [2024-10-09 01:50:00.280684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dcfe40 len:0x10000 key:0x406c26b2 00:08:40.691 [2024-10-09 01:50:00.280716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dbfd80 len:0x10000 key:0x406c26b2 00:08:40.691 [2024-10-09 01:50:00.280744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dafcc0 len:0x10000 key:0x406c26b2 00:08:40.691 [2024-10-09 01:50:00.280771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d9fc00 len:0x10000 key:0x406c26b2 00:08:40.691 [2024-10-09 01:50:00.280798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d8fb40 len:0x10000 key:0x406c26b2 00:08:40.691 [2024-10-09 01:50:00.280824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d7fa80 len:0x10000 key:0x406c26b2 00:08:40.691 [2024-10-09 01:50:00.280852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d6f9c0 len:0x10000 key:0x406c26b2 00:08:40.691 [2024-10-09 01:50:00.280877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d5f900 len:0x10000 key:0x406c26b2 00:08:40.691 [2024-10-09 01:50:00.280904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62208 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d4f840 len:0x10000 key:0x406c26b2 00:08:40.691 [2024-10-09 01:50:00.280931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d3f780 len:0x10000 key:0x406c26b2 00:08:40.691 [2024-10-09 01:50:00.280956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d2f6c0 len:0x10000 key:0x406c26b2 00:08:40.691 [2024-10-09 01:50:00.280982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.280998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d1f600 len:0x10000 key:0x406c26b2 00:08:40.691 [2024-10-09 01:50:00.281011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.281027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:62720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d0f540 len:0x10000 key:0x406c26b2 00:08:40.691 [2024-10-09 01:50:00.281040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.281054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199effc0 len:0x10000 key:0xf12ff0a2 00:08:40.691 [2024-10-09 01:50:00.281066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.281079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199dff00 len:0x10000 key:0xf12ff0a2 00:08:40.691 [2024-10-09 01:50:00.281093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.281107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199cfe40 len:0x10000 key:0xf12ff0a2 00:08:40.691 [2024-10-09 01:50:00.281119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.281133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199bfd80 len:0x10000 key:0xf12ff0a2 00:08:40.691 [2024-10-09 01:50:00.281144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.281158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199afcc0 len:0x10000 key:0xf12ff0a2 
00:08:40.691 [2024-10-09 01:50:00.281171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.281184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001999fc00 len:0x10000 key:0xf12ff0a2 00:08:40.691 [2024-10-09 01:50:00.281196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.281210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001998fb40 len:0x10000 key:0xf12ff0a2 00:08:40.691 [2024-10-09 01:50:00.281222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.281236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001997fa80 len:0x10000 key:0xf12ff0a2 00:08:40.691 [2024-10-09 01:50:00.281248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.281262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001996f9c0 len:0x10000 key:0xf12ff0a2 00:08:40.691 [2024-10-09 01:50:00.281273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.281288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001995f900 len:0x10000 key:0xf12ff0a2 00:08:40.691 [2024-10-09 01:50:00.281301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.281315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001994f840 len:0x10000 key:0xf12ff0a2 00:08:40.691 [2024-10-09 01:50:00.281327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.281342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001993f780 len:0x10000 key:0xf12ff0a2 00:08:40.691 [2024-10-09 01:50:00.281354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.691 [2024-10-09 01:50:00.281368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001992f6c0 len:0x10000 key:0xf12ff0a2 00:08:40.692 [2024-10-09 01:50:00.281380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.692 [2024-10-09 01:50:00.281394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001991f600 len:0x10000 key:0xf12ff0a2 00:08:40.692 [2024-10-09 01:50:00.281407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.692 [2024-10-09 01:50:00.281423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001990f540 len:0x10000 key:0xf12ff0a2 00:08:40.692 [2024-10-09 01:50:00.281436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.692 [2024-10-09 01:50:00.281451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b380 len:0x10000 key:0xd145200a 00:08:40.692 [2024-10-09 01:50:00.281464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.692 [2024-10-09 01:50:00.282168] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001a102140 was disconnected and freed. reset controller. 00:08:40.692 [2024-10-09 01:50:00.283192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:40.692 task offset: 57344 on job bdev=Nvme0n1 fails 00:08:40.692 00:08:40.692 Latency(us) 00:08:40.692 [2024-10-08T23:50:00.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.692 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:40.692 Job: Nvme0n1 ended in about 0.71 seconds with error 00:08:40.692 Verification LBA range: start 0x0 length 0x400 00:08:40.692 Nvme0n1 : 0.71 625.08 39.07 90.30 0.00 88152.40 2208.28 550730.35 00:08:40.692 [2024-10-08T23:50:00.512Z] =================================================================================================================== 00:08:40.692 [2024-10-08T23:50:00.512Z] Total : 625.08 39.07 90.30 0.00 88152.40 2208.28 550730.35 00:08:40.692 [2024-10-09 01:50:00.288869] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:40.692 [2024-10-09 01:50:00.288904] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:08:40.692 [2024-10-09 01:50:00.315954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:08:40.692 [2024-10-09 01:50:00.336935] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
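
The wall of ABORTED - SQ DELETION completions above is the intended failure injection, not a malfunction: host_management.sh@84/@85, traced earlier, yanks the host from the subsystem while bdevperf is mid-run, waits a second, then restores it. In RPC terms:

    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1   # the target tears down host0's queue pairs; bdevperf logs the aborts and resets
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # with access restored the reset goes through: "Resetting controller successful."
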
00:08:40.951 01:50:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3127796 00:08:40.951 01:50:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:40.951 01:50:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:41.209 01:50:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:41.209 01:50:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:08:41.209 01:50:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:08:41.209 01:50:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:41.209 01:50:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:41.209 { 00:08:41.209 "params": { 00:08:41.209 "name": "Nvme$subsystem", 00:08:41.209 "trtype": "$TEST_TRANSPORT", 00:08:41.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.209 "adrfam": "ipv4", 00:08:41.209 "trsvcid": "$NVMF_PORT", 00:08:41.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.209 "hdgst": ${hdgst:-false}, 00:08:41.209 "ddgst": ${ddgst:-false} 00:08:41.210 }, 00:08:41.210 "method": "bdev_nvme_attach_controller" 00:08:41.210 } 00:08:41.210 EOF 00:08:41.210 )") 00:08:41.210 01:50:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:08:41.210 01:50:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:08:41.210 01:50:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:08:41.210 01:50:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:41.210 "params": { 00:08:41.210 "name": "Nvme0", 00:08:41.210 "trtype": "rdma", 00:08:41.210 "traddr": "192.168.100.8", 00:08:41.210 "adrfam": "ipv4", 00:08:41.210 "trsvcid": "4420", 00:08:41.210 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:41.210 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:41.210 "hdgst": false, 00:08:41.210 "ddgst": false 00:08:41.210 }, 00:08:41.210 "method": "bdev_nvme_attach_controller" 00:08:41.210 }' 00:08:41.210 [2024-10-09 01:50:00.855251] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:08:41.210 [2024-10-09 01:50:00.855351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3128028 ] 00:08:41.210 [2024-10-09 01:50:00.983319] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.469 [2024-10-09 01:50:01.184888] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.035 Running I/O for 1 seconds... 
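
A quick way to sanity-check the bdevperf tables in this log: with -o 65536 every I/O is 64 KiB, so MiB/s is simply IOPS divided by 16. Against the interrupted run's numbers above:

    awk 'BEGIN { printf "%.2f MiB/s\n", 625.08 * 65536 / 1048576 }'   # 39.07, matching the table
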
00:08:42.969 2752.00 IOPS, 172.00 MiB/s 00:08:42.969 Latency(us) 00:08:42.969 [2024-10-08T23:50:02.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.969 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:42.969 Verification LBA range: start 0x0 length 0x400 00:08:42.969 Nvme0n1 : 1.02 2801.94 175.12 0.00 0.00 22346.79 1624.15 36700.16 00:08:42.969 [2024-10-08T23:50:02.789Z] =================================================================================================================== 00:08:42.969 [2024-10-08T23:50:02.789Z] Total : 2801.94 175.12 0.00 0.00 22346.79 1624.15 36700.16 00:08:43.902 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 3127796 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:08:43.902 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:43.902 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:43.903 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:43.903 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:43.903 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:43.903 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:43.903 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:43.903 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:43.903 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:43.903 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:43.903 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:43.903 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:43.903 rmmod nvme_rdma 00:08:43.903 rmmod nvme_fabrics 00:08:44.161 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:44.161 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:44.161 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:44.161 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 3127581 ']' 00:08:44.161 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 3127581 00:08:44.161 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3127581 ']' 00:08:44.161 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3127581 00:08:44.161 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:44.161 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:44.161 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3127581 00:08:44.161 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:44.161 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:44.161 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3127581' 00:08:44.161 killing process with pid 3127581 00:08:44.161 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3127581 00:08:44.161 01:50:03 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3127581 00:08:45.536 [2024-10-09 01:50:05.220142] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:45.536 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:45.536 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:08:45.536 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:45.536 00:08:45.536 real 0m14.132s 00:08:45.536 user 0m34.343s 00:08:45.536 sys 0m6.211s 00:08:45.536 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.536 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.536 ************************************ 00:08:45.536 END TEST nvmf_host_management 00:08:45.536 ************************************ 00:08:45.536 01:50:05 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:08:45.536 01:50:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:45.536 01:50:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.536 01:50:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:45.796 ************************************ 00:08:45.796 START TEST nvmf_lvol 00:08:45.796 ************************************ 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:08:45.796 * Looking for test storage... 
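[Editor's note] The host_management teardown traced above ends in the suite's killprocess helper (nvmf/common.sh@516 into common/autotest_common.sh@950-@974): confirm the pid is alive with kill -0, identify it with ps --no-headers -o comm=, emit the "killing process with pid ..." marker, then kill and reap it. A simplified sketch of that flow as it appears in the trace (the real helper has extra branches, e.g. when the process turns out to be a sudo wrapper):

# Sketch of the killprocess flow visible in the xtrace above (simplified).
killprocess() {
  local pid=$1 process_name
  [[ -n $pid ]] || return 1                # @950: a pid is required
  kill -0 "$pid" || return 0               # @954: nothing to do if already gone
  if [[ $(uname) == Linux ]]; then         # @955
    process_name=$(ps --no-headers -o comm= "$pid")  # @956: e.g. reactor_1
  fi
  # @960: the real helper special-cases process_name == sudo (it must target
  # the wrapped child instead); that branch is elided in this sketch.
  echo "killing process with pid $pid"     # @968
  kill "$pid"                              # @969
  wait "$pid"                              # @974: reap and propagate status
}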
00:08:45.796 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:45.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.796 --rc genhtml_branch_coverage=1 00:08:45.796 --rc genhtml_function_coverage=1 00:08:45.796 --rc genhtml_legend=1 00:08:45.796 --rc geninfo_all_blocks=1 00:08:45.796 --rc geninfo_unexecuted_blocks=1 00:08:45.796 00:08:45.796 ' 00:08:45.796 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:45.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.796 --rc genhtml_branch_coverage=1 00:08:45.796 --rc genhtml_function_coverage=1 00:08:45.796 --rc genhtml_legend=1 00:08:45.796 --rc geninfo_all_blocks=1 00:08:45.797 --rc geninfo_unexecuted_blocks=1 00:08:45.797 00:08:45.797 ' 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:45.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.797 --rc genhtml_branch_coverage=1 00:08:45.797 --rc genhtml_function_coverage=1 00:08:45.797 --rc genhtml_legend=1 00:08:45.797 --rc geninfo_all_blocks=1 00:08:45.797 --rc geninfo_unexecuted_blocks=1 00:08:45.797 00:08:45.797 ' 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:45.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.797 --rc genhtml_branch_coverage=1 00:08:45.797 --rc genhtml_function_coverage=1 00:08:45.797 --rc genhtml_legend=1 00:08:45.797 --rc geninfo_all_blocks=1 00:08:45.797 --rc geninfo_unexecuted_blocks=1 00:08:45.797 00:08:45.797 ' 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:45.797 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:45.797 01:50:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.359 01:50:11 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:08:52.359 Found 0000:18:00.0 (0x8086 - 0x159b) 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:08:52.359 Found 0000:18:00.1 (0x8086 - 0x159b) 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@403 -- # modinfo irdma 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:52.359 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:08:52.360 Found net devices under 0000:18:00.0: cvl_0_0 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:08:52.360 Found net devices under 0000:18:00.1: cvl_0_1 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 
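[Editor's note] The discovery pass traced above maps the detected Intel e810 PCI functions to kernel net devices: each device id is matched against the e810/x722/mlx tables, irdma is loaded with RoCE enabled, and /sys/bus/pci/devices/<pci>/net/ is globbed to recover the interface names ("Found net devices under 0000:18:00.0: cvl_0_0"). A sketch of that sysfs loop, using the two PCI addresses reported in this run:

# Sketch of the per-PCI-function netdev discovery traced above
# (modprobe needs root; the sysfs paths are standard).
pci_devs=(0000:18:00.0 0000:18:00.1)   # the two e810 ports found above
net_devs=()
sudo modprobe irdma roce_ena=1          # nvmf/common.sh@403: enable RoCE
for pci in "${pci_devs[@]}"; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # netdev entries for the port
  pci_net_devs=("${pci_net_devs[@]##*/}")           # strip paths, keep ifnames
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done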
00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # rdma_device_init 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@528 -- # allocate_nic_ips 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo cvl_0_0 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo cvl_0_1 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:52.360 01:50:11 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:52.360 01:50:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:08:52.360 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:08:52.360 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:08:52.360 altname enp24s0f0np0 00:08:52.360 altname ens785f0np0 00:08:52.360 inet 192.168.100.8/24 scope global cvl_0_0 00:08:52.360 valid_lft forever preferred_lft forever 00:08:52.360 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:08:52.360 valid_lft forever preferred_lft forever 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:08:52.360 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:08:52.360 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:08:52.360 altname enp24s0f1np1 00:08:52.360 altname ens785f1np1 00:08:52.360 inet 192.168.100.9/24 scope global cvl_0_1 00:08:52.360 valid_lft forever preferred_lft forever 00:08:52.360 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:08:52.360 valid_lft forever preferred_lft forever 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:52.360 
01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo cvl_0_0 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo cvl_0_1 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:08:52.360 192.168.100.9' 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:08:52.360 192.168.100.9' 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # head -n 1 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:52.360 01:50:12 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:08:52.360 192.168.100.9' 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # head -n 1 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # tail -n +2 00:08:52.360 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:52.361 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:08:52.361 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:52.361 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:08:52.361 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:08:52.361 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:08:52.361 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:52.361 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:52.361 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:52.361 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:52.361 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=3131612 00:08:52.361 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:52.361 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 3131612 00:08:52.361 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3131612 ']' 00:08:52.361 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.361 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:52.361 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.361 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:52.361 01:50:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:52.620 [2024-10-09 01:50:12.242929] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:08:52.620 [2024-10-09 01:50:12.243055] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.620 [2024-10-09 01:50:12.374298] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:52.878 [2024-10-09 01:50:12.580361] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.878 [2024-10-09 01:50:12.580419] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
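[Editor's note] Just above, nvmf/common.sh collects one IPv4 address per RDMA interface (ip -o -4 addr show | awk '{print $4}' | cut -d/ -f1, traced at @117) into RDMA_IP_LIST, then @483/@484 pick the first and second entries as the target IPs. Condensed to the operations shown:

# Sketch of the target-IP selection at nvmf/common.sh@482-@484.
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'   # gathered from cvl_0_0/cvl_0_1 above
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9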
00:08:52.878 [2024-10-09 01:50:12.580434] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.878 [2024-10-09 01:50:12.580449] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.878 [2024-10-09 01:50:12.580460] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.878 [2024-10-09 01:50:12.582043] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.878 [2024-10-09 01:50:12.582108] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.878 [2024-10-09 01:50:12.582110] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.445 01:50:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:53.445 01:50:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:53.445 01:50:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:53.445 01:50:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:53.445 01:50:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:53.445 01:50:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.445 01:50:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:53.703 [2024-10-09 01:50:13.287230] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x612000028fc0/0x617000007c40) succeed. 00:08:53.703 [2024-10-09 01:50:13.296769] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029140/0x617000007fc0) succeed. 00:08:53.703 [2024-10-09 01:50:13.296805] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:08:53.703 01:50:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:53.961 01:50:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:53.961 01:50:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.219 01:50:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:54.219 01:50:13 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:54.478 01:50:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:54.478 01:50:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e49bc2e7-8477-4d2a-8aae-630d686aeb75 00:08:54.478 01:50:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e49bc2e7-8477-4d2a-8aae-630d686aeb75 lvol 20 00:08:54.736 01:50:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=597d0b1f-8ace-4629-a4e0-41c471550fc9 00:08:54.736 01:50:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:54.993 01:50:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 597d0b1f-8ace-4629-a4e0-41c471550fc9 00:08:55.252 01:50:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:55.252 [2024-10-09 01:50:15.052429] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:55.510 01:50:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:55.510 01:50:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3132006 00:08:55.510 01:50:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:55.510 01:50:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:56.885 01:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 597d0b1f-8ace-4629-a4e0-41c471550fc9 MY_SNAPSHOT 00:08:56.885 01:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b9560236-7d73-489c-aedd-bd7453bbc1ba 00:08:56.885 01:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 597d0b1f-8ace-4629-a4e0-41c471550fc9 30 00:08:57.143 01:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b9560236-7d73-489c-aedd-bd7453bbc1ba MY_CLONE 00:08:57.143 01:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=28f53592-97d1-4935-94ac-d0ea5137dafb 00:08:57.143 01:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 28f53592-97d1-4935-94ac-d0ea5137dafb 00:08:57.710 01:50:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3132006 00:09:07.681 Initializing NVMe Controllers 00:09:07.681 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:09:07.681 Controller IO queue size 128, less than required. 00:09:07.681 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:07.681 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:07.681 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:07.681 Initialization complete. Launching workers. 00:09:07.681 ======================================================== 00:09:07.681 Latency(us) 00:09:07.681 Device Information : IOPS MiB/s Average min max 00:09:07.681 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15373.70 60.05 8327.59 3534.80 149257.81 00:09:07.681 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15290.10 59.73 8372.73 4183.52 133508.99 00:09:07.681 ======================================================== 00:09:07.681 Total : 30663.80 119.78 8350.10 3534.80 149257.81 00:09:07.681 00:09:07.681 01:50:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:07.681 01:50:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 597d0b1f-8ace-4629-a4e0-41c471550fc9 00:09:07.681 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e49bc2e7-8477-4d2a-8aae-630d686aeb75 00:09:07.681 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:07.681 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:07.681 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:07.681 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:07.681 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:07.681 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:07.681 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:07.681 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:07.681 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:07.681 01:50:27 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:07.681 rmmod nvme_rdma 00:09:07.681 rmmod nvme_fabrics 00:09:07.681 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:07.681 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:07.681 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:07.681 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 3131612 ']' 00:09:07.681 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 3131612 00:09:07.681 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3131612 ']' 00:09:07.681 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3131612 00:09:07.681 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:09:07.681 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:07.681 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3131612 00:09:07.940 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:07.940 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:07.940 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3131612' 00:09:07.940 killing process with pid 3131612 00:09:07.940 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3131612 00:09:07.940 01:50:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3131612 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:09:09.841 00:09:09.841 real 0m23.817s 00:09:09.841 user 1m15.720s 00:09:09.841 sys 0m6.565s 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:09.841 ************************************ 00:09:09.841 END TEST nvmf_lvol 00:09:09.841 ************************************ 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.841 ************************************ 00:09:09.841 START TEST nvmf_lvs_grow 00:09:09.841 ************************************ 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:09:09.841 * Looking for test storage... 
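[Editor's note] Before the next test's output begins: for reference, the rpc.py sequence the nvmf_lvol run above drove, condensed from the xtrace, with the object names and UUIDs this run reported ($rpc abbreviates the rpc_py path set at nvmf_lvol.sh@16):

# The nvmf_lvol control-plane sequence, condensed from the xtrace above.
rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                                  # -> Malloc0
$rpc bdev_malloc_create 64 512                                  # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
$rpc bdev_lvol_create_lvstore raid0 lvs
# (returned lvstore UUID e49bc2e7-8477-4d2a-8aae-630d686aeb75)
$rpc bdev_lvol_create -u e49bc2e7-8477-4d2a-8aae-630d686aeb75 lvol 20
# (returned lvol UUID 597d0b1f-8ace-4629-a4e0-41c471550fc9)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 597d0b1f-8ace-4629-a4e0-41c471550fc9
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
$rpc bdev_lvol_snapshot 597d0b1f-8ace-4629-a4e0-41c471550fc9 MY_SNAPSHOT
# (returned snapshot UUID b9560236-7d73-489c-aedd-bd7453bbc1ba)
$rpc bdev_lvol_resize 597d0b1f-8ace-4629-a4e0-41c471550fc9 30
$rpc bdev_lvol_clone b9560236-7d73-489c-aedd-bd7453bbc1ba MY_CLONE
# (returned clone UUID 28f53592-97d1-4935-94ac-d0ea5137dafb)
$rpc bdev_lvol_inflate 28f53592-97d1-4935-94ac-d0ea5137dafb
# spdk_nvme_perf then runs randwrite for 10 s against the subsystem; teardown:
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete 597d0b1f-8ace-4629-a4e0-41c471550fc9
$rpc bdev_lvol_delete_lvstore -u e49bc2e7-8477-4d2a-8aae-630d686aeb75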
00:09:09.841 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:09.841 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:09.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.842 --rc genhtml_branch_coverage=1 00:09:09.842 --rc genhtml_function_coverage=1 00:09:09.842 --rc genhtml_legend=1 00:09:09.842 --rc geninfo_all_blocks=1 00:09:09.842 --rc geninfo_unexecuted_blocks=1 00:09:09.842 00:09:09.842 ' 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:09.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.842 --rc genhtml_branch_coverage=1 00:09:09.842 --rc genhtml_function_coverage=1 00:09:09.842 --rc genhtml_legend=1 00:09:09.842 --rc geninfo_all_blocks=1 00:09:09.842 --rc geninfo_unexecuted_blocks=1 00:09:09.842 00:09:09.842 ' 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:09.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.842 --rc genhtml_branch_coverage=1 00:09:09.842 --rc genhtml_function_coverage=1 00:09:09.842 --rc genhtml_legend=1 00:09:09.842 --rc geninfo_all_blocks=1 00:09:09.842 --rc geninfo_unexecuted_blocks=1 00:09:09.842 00:09:09.842 ' 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:09.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.842 --rc genhtml_branch_coverage=1 00:09:09.842 --rc genhtml_function_coverage=1 00:09:09.842 --rc genhtml_legend=1 00:09:09.842 --rc geninfo_all_blocks=1 00:09:09.842 --rc geninfo_unexecuted_blocks=1 00:09:09.842 00:09:09.842 ' 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
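[Editor's note] The lcov gate traced above (here and before nvmf_lvol) is scripts/common.sh's lt/cmp_versions: split each version string on ./-/:, then compare numeric components left to right, so "lt 1.15 2" is true and the 1.x LCOV_OPTS exports follow. A simplified sketch of that comparison (the real helper also vets each component with its decimal() check, elided here):

# Simplified sketch of the scripts/common.sh version comparison traced above.
lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
  local -a ver1 ver2
  local op=$2 v max
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$3"
  max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for ((v = 0; v < max; v++)); do       # missing components compare as 0
    if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
      [[ $op == '>' || $op == '>=' ]]; return
    elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
      [[ $op == '<' || $op == '<=' ]]; return
    fi
  done
  [[ $op == '==' || $op == '<=' || $op == '>=' ]]  # all components equal
}
lt 1.15 2 && echo "lcov < 2: use the 1.x flag set"  # matches the traced result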
00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.842 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:09.842 01:50:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:09:16.428 Found 0000:18:00.0 (0x8086 - 0x159b) 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:09:16.428 Found 0000:18:00.1 (0x8086 - 0x159b) 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
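
The two "Found 0000:18:00.x (0x8086 - 0x159b)" lines mean gather_supported_nvmf_pci_devs matched both ports of an Intel E810 NIC against the ID allow-lists assembled just above. A rough standalone equivalent, assuming plain sysfs reads instead of SPDK's pci_bus_cache; the device IDs are the E810 entries from this trace:

    #!/usr/bin/env bash
    # Scan PCI devices for Intel E810 NICs the way the trace's allow-list
    # match works (assumed simplification; SPDK caches this in pci_bus_cache).
    intel=0x8086
    e810=(0x1592 0x159b)  # E810 device IDs taken from the trace above

    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor")
        device=$(<"$dev/device")
        [[ $vendor == "$intel" ]] || continue
        for id in "${e810[@]}"; do
            [[ $device == "$id" ]] && echo "Found ${dev##*/} ($vendor - $device)"
        done
    done

On this host that yields the same two functions, 0000:18:00.0 and 0000:18:00.1, which the harness then drives through irdma with roce_ena=1 as the following lines show.
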
00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.428 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@403 -- # modinfo irdma 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:09:16.429 Found net devices under 0000:18:00.0: cvl_0_0 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:09:16.429 Found net devices under 0000:18:00.1: cvl_0_1 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:16.429 01:50:35 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # rdma_device_init 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@528 -- # allocate_nic_ips 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo cvl_0_0 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:09:16.429 01:50:35 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo cvl_0_1 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:09:16.429 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:09:16.429 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:09:16.429 altname enp24s0f0np0 00:09:16.429 altname ens785f0np0 00:09:16.429 inet 192.168.100.8/24 scope global cvl_0_0 00:09:16.429 valid_lft forever preferred_lft forever 00:09:16.429 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:09:16.429 valid_lft forever preferred_lft forever 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:09:16.429 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:09:16.429 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:09:16.429 altname enp24s0f1np1 00:09:16.429 altname ens785f1np1 00:09:16.429 inet 192.168.100.9/24 scope global cvl_0_1 00:09:16.429 valid_lft forever preferred_lft forever 00:09:16.429 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:09:16.429 valid_lft forever preferred_lft forever 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:09:16.429 01:50:35 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo cvl_0_0 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo cvl_0_1 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- 
# cut -d/ -f1 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:09:16.429 192.168.100.9' 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:09:16.429 192.168.100.9' 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # head -n 1 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:09:16.429 192.168.100.9' 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # tail -n +2 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # head -n 1 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=3136747 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 3136747 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3136747 ']' 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:16.429 01:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:16.429 [2024-10-09 01:50:35.765022] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
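
All of the address bookkeeping above boils down to one idiom: take each RDMA-capable interface's first IPv4 address via ip/awk/cut, join them into RDMA_IP_LIST, then peel off the first and second entries with head and tail. A standalone sketch, with the interface names being the ones this run discovered:

    #!/usr/bin/env bash
    # Reproduce the IP harvesting from the trace: first IPv4 per interface,
    # then first/second target selection. Interface names assume this host.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    rdma_ip_list=$(for ifc in cvl_0_0 cvl_0_1; do get_ip_address "$ifc"; done)

    first=$(head -n 1 <<< "$rdma_ip_list")                 # 192.168.100.8 here
    second=$(tail -n +2 <<< "$rdma_ip_list" | head -n 1)   # 192.168.100.9 here
    echo "NVMF_FIRST_TARGET_IP=$first NVMF_SECOND_TARGET_IP=$second"

The -o flag keeps each address record on a single line, which is what makes the field-4/cut extraction reliable.
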
00:09:16.429 [2024-10-09 01:50:35.765127] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.429 [2024-10-09 01:50:35.907396] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.429 [2024-10-09 01:50:36.164799] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.429 [2024-10-09 01:50:36.164868] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.429 [2024-10-09 01:50:36.164884] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.429 [2024-10-09 01:50:36.164901] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.429 [2024-10-09 01:50:36.164914] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:16.429 [2024-10-09 01:50:36.166378] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.996 01:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.996 01:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:16.996 01:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:16.996 01:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:16.996 01:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:16.996 01:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.996 01:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:16.996 [2024-10-09 01:50:36.799743] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x6120000289c0/0x617000007fc0) succeed. 00:09:16.996 [2024-10-09 01:50:36.808926] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000028b40/0x617000008340) succeed. 00:09:16.996 [2024-10-09 01:50:36.808962] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:09:17.255 01:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:17.255 01:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:17.255 01:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:17.255 01:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:17.255 ************************************ 00:09:17.255 START TEST lvs_grow_clean 00:09:17.255 ************************************ 00:09:17.255 01:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:17.255 01:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:17.255 01:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:17.255 01:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:17.255 01:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:17.256 01:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:17.256 01:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:17.256 01:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:17.256 01:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:17.256 01:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:17.514 01:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:17.514 01:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:17.514 01:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=19b471fc-a32f-432b-89e0-a5c4c30a5a37 00:09:17.514 01:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19b471fc-a32f-432b-89e0-a5c4c30a5a37 00:09:17.514 01:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:17.772 01:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:17.772 01:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:17.772 01:50:37 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 19b471fc-a32f-432b-89e0-a5c4c30a5a37 lvol 150 00:09:18.030 01:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=bba35e6c-564a-44fa-a19c-cc15ea89c563 00:09:18.030 01:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:18.030 01:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:18.289 [2024-10-09 01:50:37.853481] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:18.289 [2024-10-09 01:50:37.853587] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:18.289 true 00:09:18.289 01:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:18.289 01:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19b471fc-a32f-432b-89e0-a5c4c30a5a37 00:09:18.289 01:50:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:18.289 01:50:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:18.547 01:50:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bba35e6c-564a-44fa-a19c-cc15ea89c563 00:09:18.805 01:50:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:18.805 [2024-10-09 01:50:38.619931] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:19.074 01:50:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:19.074 01:50:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3137256 00:09:19.074 01:50:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:19.074 01:50:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:19.074 01:50:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3137256 /var/tmp/bdevperf.sock 00:09:19.074 01:50:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3137256 ']' 00:09:19.074 01:50:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:19.074 01:50:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:19.074 01:50:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:19.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:19.074 01:50:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:19.074 01:50:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:19.333 [2024-10-09 01:50:38.918560] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:09:19.333 [2024-10-09 01:50:38.918651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3137256 ] 00:09:19.333 [2024-10-09 01:50:39.039730] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.591 [2024-10-09 01:50:39.235819] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.158 01:50:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.158 01:50:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:20.158 01:50:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:20.158 Nvme0n1 00:09:20.416 01:50:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:20.416 [ 00:09:20.416 { 00:09:20.416 "name": "Nvme0n1", 00:09:20.416 "aliases": [ 00:09:20.416 "bba35e6c-564a-44fa-a19c-cc15ea89c563" 00:09:20.416 ], 00:09:20.416 "product_name": "NVMe disk", 00:09:20.416 "block_size": 4096, 00:09:20.416 "num_blocks": 38912, 00:09:20.416 "uuid": "bba35e6c-564a-44fa-a19c-cc15ea89c563", 00:09:20.416 "numa_id": 0, 00:09:20.416 "assigned_rate_limits": { 00:09:20.416 "rw_ios_per_sec": 0, 00:09:20.416 "rw_mbytes_per_sec": 0, 00:09:20.416 "r_mbytes_per_sec": 0, 00:09:20.416 "w_mbytes_per_sec": 0 00:09:20.416 }, 00:09:20.416 "claimed": false, 00:09:20.416 "zoned": false, 00:09:20.416 "supported_io_types": { 00:09:20.416 "read": true, 00:09:20.416 "write": true, 00:09:20.416 "unmap": true, 00:09:20.416 "flush": true, 00:09:20.416 "reset": true, 00:09:20.416 "nvme_admin": true, 00:09:20.416 "nvme_io": true, 00:09:20.416 "nvme_io_md": false, 00:09:20.416 "write_zeroes": true, 00:09:20.416 "zcopy": false, 00:09:20.416 "get_zone_info": false, 00:09:20.416 "zone_management": false, 
00:09:20.416 "zone_append": false, 00:09:20.416 "compare": true, 00:09:20.416 "compare_and_write": true, 00:09:20.416 "abort": true, 00:09:20.416 "seek_hole": false, 00:09:20.416 "seek_data": false, 00:09:20.416 "copy": true, 00:09:20.416 "nvme_iov_md": false 00:09:20.416 }, 00:09:20.416 "memory_domains": [ 00:09:20.416 { 00:09:20.416 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:09:20.416 "dma_device_type": 0 00:09:20.416 } 00:09:20.416 ], 00:09:20.416 "driver_specific": { 00:09:20.416 "nvme": [ 00:09:20.416 { 00:09:20.416 "trid": { 00:09:20.416 "trtype": "RDMA", 00:09:20.416 "adrfam": "IPv4", 00:09:20.416 "traddr": "192.168.100.8", 00:09:20.416 "trsvcid": "4420", 00:09:20.416 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:20.416 }, 00:09:20.416 "ctrlr_data": { 00:09:20.416 "cntlid": 1, 00:09:20.416 "vendor_id": "0x8086", 00:09:20.416 "model_number": "SPDK bdev Controller", 00:09:20.416 "serial_number": "SPDK0", 00:09:20.416 "firmware_revision": "25.01", 00:09:20.416 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:20.416 "oacs": { 00:09:20.416 "security": 0, 00:09:20.416 "format": 0, 00:09:20.416 "firmware": 0, 00:09:20.416 "ns_manage": 0 00:09:20.416 }, 00:09:20.416 "multi_ctrlr": true, 00:09:20.416 "ana_reporting": false 00:09:20.416 }, 00:09:20.416 "vs": { 00:09:20.416 "nvme_version": "1.3" 00:09:20.416 }, 00:09:20.416 "ns_data": { 00:09:20.416 "id": 1, 00:09:20.416 "can_share": true 00:09:20.416 } 00:09:20.416 } 00:09:20.416 ], 00:09:20.416 "mp_policy": "active_passive" 00:09:20.416 } 00:09:20.416 } 00:09:20.416 ] 00:09:20.416 01:50:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:20.416 01:50:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3137436 00:09:20.416 01:50:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:20.674 Running I/O for 10 seconds... 
00:09:21.609 Latency(us) 00:09:21.609 [2024-10-08T23:50:41.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.609 Nvme0n1 : 1.00 29120.00 113.75 0.00 0.00 0.00 0.00 0.00 00:09:21.609 [2024-10-08T23:50:41.429Z] =================================================================================================================== 00:09:21.609 [2024-10-08T23:50:41.429Z] Total : 29120.00 113.75 0.00 0.00 0.00 0.00 0.00 00:09:21.609 00:09:22.543 01:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 19b471fc-a32f-432b-89e0-a5c4c30a5a37 00:09:22.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.543 Nvme0n1 : 2.00 29760.00 116.25 0.00 0.00 0.00 0.00 0.00 00:09:22.543 [2024-10-08T23:50:42.363Z] =================================================================================================================== 00:09:22.543 [2024-10-08T23:50:42.363Z] Total : 29760.00 116.25 0.00 0.00 0.00 0.00 0.00 00:09:22.543 00:09:22.801 true 00:09:22.801 01:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19b471fc-a32f-432b-89e0-a5c4c30a5a37 00:09:22.801 01:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:22.801 01:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:22.801 01:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:22.801 01:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3137436 00:09:23.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.736 Nvme0n1 : 3.00 29931.33 116.92 0.00 0.00 0.00 0.00 0.00 00:09:23.736 [2024-10-08T23:50:43.556Z] =================================================================================================================== 00:09:23.736 [2024-10-08T23:50:43.556Z] Total : 29931.33 116.92 0.00 0.00 0.00 0.00 0.00 00:09:23.736 00:09:24.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.670 Nvme0n1 : 4.00 30104.25 117.59 0.00 0.00 0.00 0.00 0.00 00:09:24.670 [2024-10-08T23:50:44.490Z] =================================================================================================================== 00:09:24.670 [2024-10-08T23:50:44.490Z] Total : 30104.25 117.59 0.00 0.00 0.00 0.00 0.00 00:09:24.670 00:09:25.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.604 Nvme0n1 : 5.00 30233.00 118.10 0.00 0.00 0.00 0.00 0.00 00:09:25.604 [2024-10-08T23:50:45.424Z] =================================================================================================================== 00:09:25.604 [2024-10-08T23:50:45.424Z] Total : 30233.00 118.10 0.00 0.00 0.00 0.00 0.00 00:09:25.604 00:09:26.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.643 Nvme0n1 : 6.00 30313.83 118.41 0.00 0.00 0.00 0.00 0.00 00:09:26.643 [2024-10-08T23:50:46.463Z] 
=================================================================================================================== 00:09:26.643 [2024-10-08T23:50:46.463Z] Total : 30313.83 118.41 0.00 0.00 0.00 0.00 0.00 00:09:26.643 00:09:27.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.578 Nvme0n1 : 7.00 30381.43 118.68 0.00 0.00 0.00 0.00 0.00 00:09:27.578 [2024-10-08T23:50:47.398Z] =================================================================================================================== 00:09:27.578 [2024-10-08T23:50:47.398Z] Total : 30381.43 118.68 0.00 0.00 0.00 0.00 0.00 00:09:27.578 00:09:28.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.513 Nvme0n1 : 8.00 30435.62 118.89 0.00 0.00 0.00 0.00 0.00 00:09:28.513 [2024-10-08T23:50:48.333Z] =================================================================================================================== 00:09:28.513 [2024-10-08T23:50:48.333Z] Total : 30435.62 118.89 0.00 0.00 0.00 0.00 0.00 00:09:28.513 00:09:29.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.888 Nvme0n1 : 9.00 30457.44 118.97 0.00 0.00 0.00 0.00 0.00 00:09:29.888 [2024-10-08T23:50:49.708Z] =================================================================================================================== 00:09:29.888 [2024-10-08T23:50:49.708Z] Total : 30457.44 118.97 0.00 0.00 0.00 0.00 0.00 00:09:29.888 00:09:30.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.823 Nvme0n1 : 10.00 30431.70 118.87 0.00 0.00 0.00 0.00 0.00 00:09:30.823 [2024-10-08T23:50:50.643Z] =================================================================================================================== 00:09:30.823 [2024-10-08T23:50:50.643Z] Total : 30431.70 118.87 0.00 0.00 0.00 0.00 0.00 00:09:30.823 00:09:30.823 00:09:30.823 Latency(us) 00:09:30.823 [2024-10-08T23:50:50.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.823 Nvme0n1 : 10.00 30429.57 118.87 0.00 0.00 4202.97 2863.64 21541.40 00:09:30.823 [2024-10-08T23:50:50.643Z] =================================================================================================================== 00:09:30.823 [2024-10-08T23:50:50.643Z] Total : 30429.57 118.87 0.00 0.00 4202.97 2863.64 21541.40 00:09:30.823 { 00:09:30.823 "results": [ 00:09:30.823 { 00:09:30.823 "job": "Nvme0n1", 00:09:30.823 "core_mask": "0x2", 00:09:30.823 "workload": "randwrite", 00:09:30.823 "status": "finished", 00:09:30.823 "queue_depth": 128, 00:09:30.823 "io_size": 4096, 00:09:30.823 "runtime": 10.003822, 00:09:30.823 "iops": 30429.569818415403, 00:09:30.823 "mibps": 118.86550710318517, 00:09:30.823 "io_failed": 0, 00:09:30.823 "io_timeout": 0, 00:09:30.823 "avg_latency_us": 4202.969996840667, 00:09:30.823 "min_latency_us": 2863.6382608695653, 00:09:30.823 "max_latency_us": 21541.398260869566 00:09:30.823 } 00:09:30.823 ], 00:09:30.823 "core_count": 1 00:09:30.823 } 00:09:30.823 01:50:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3137256 00:09:30.823 01:50:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3137256 ']' 00:09:30.823 01:50:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3137256 00:09:30.823 01:50:50 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:30.823 01:50:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:30.823 01:50:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3137256 00:09:30.823 01:50:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:30.823 01:50:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:30.823 01:50:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3137256' 00:09:30.823 killing process with pid 3137256 00:09:30.823 01:50:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3137256 00:09:30.823 Received shutdown signal, test time was about 10.000000 seconds 00:09:30.823 00:09:30.823 Latency(us) 00:09:30.823 [2024-10-08T23:50:50.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.823 [2024-10-08T23:50:50.643Z] =================================================================================================================== 00:09:30.823 [2024-10-08T23:50:50.643Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:30.823 01:50:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3137256 00:09:31.758 01:50:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:32.016 01:50:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:32.275 01:50:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19b471fc-a32f-432b-89e0-a5c4c30a5a37 00:09:32.275 01:50:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:32.275 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:32.275 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:32.275 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:32.533 [2024-10-09 01:50:52.194433] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:32.533 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19b471fc-a32f-432b-89e0-a5c4c30a5a37 00:09:32.533 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:32.533 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # 
valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19b471fc-a32f-432b-89e0-a5c4c30a5a37 00:09:32.533 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:09:32.533 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:32.533 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:09:32.533 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:32.533 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:09:32.533 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:32.533 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:09:32.533 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py ]] 00:09:32.533 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19b471fc-a32f-432b-89e0-a5c4c30a5a37 00:09:32.792 request: 00:09:32.792 { 00:09:32.792 "uuid": "19b471fc-a32f-432b-89e0-a5c4c30a5a37", 00:09:32.792 "method": "bdev_lvol_get_lvstores", 00:09:32.792 "req_id": 1 00:09:32.792 } 00:09:32.792 Got JSON-RPC error response 00:09:32.792 response: 00:09:32.792 { 00:09:32.792 "code": -19, 00:09:32.792 "message": "No such device" 00:09:32.792 } 00:09:32.792 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:32.792 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:32.792 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:32.792 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:32.792 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:32.792 aio_bdev 00:09:33.050 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bba35e6c-564a-44fa-a19c-cc15ea89c563 00:09:33.050 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=bba35e6c-564a-44fa-a19c-cc15ea89c563 00:09:33.050 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:33.050 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:33.050 01:50:52 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:33.050 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:33.050 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:33.050 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bba35e6c-564a-44fa-a19c-cc15ea89c563 -t 2000 00:09:33.308 [ 00:09:33.308 { 00:09:33.308 "name": "bba35e6c-564a-44fa-a19c-cc15ea89c563", 00:09:33.308 "aliases": [ 00:09:33.308 "lvs/lvol" 00:09:33.308 ], 00:09:33.308 "product_name": "Logical Volume", 00:09:33.308 "block_size": 4096, 00:09:33.308 "num_blocks": 38912, 00:09:33.308 "uuid": "bba35e6c-564a-44fa-a19c-cc15ea89c563", 00:09:33.308 "assigned_rate_limits": { 00:09:33.308 "rw_ios_per_sec": 0, 00:09:33.308 "rw_mbytes_per_sec": 0, 00:09:33.308 "r_mbytes_per_sec": 0, 00:09:33.308 "w_mbytes_per_sec": 0 00:09:33.308 }, 00:09:33.308 "claimed": false, 00:09:33.308 "zoned": false, 00:09:33.308 "supported_io_types": { 00:09:33.308 "read": true, 00:09:33.308 "write": true, 00:09:33.308 "unmap": true, 00:09:33.308 "flush": false, 00:09:33.308 "reset": true, 00:09:33.308 "nvme_admin": false, 00:09:33.308 "nvme_io": false, 00:09:33.308 "nvme_io_md": false, 00:09:33.308 "write_zeroes": true, 00:09:33.308 "zcopy": false, 00:09:33.308 "get_zone_info": false, 00:09:33.308 "zone_management": false, 00:09:33.308 "zone_append": false, 00:09:33.308 "compare": false, 00:09:33.308 "compare_and_write": false, 00:09:33.308 "abort": false, 00:09:33.308 "seek_hole": true, 00:09:33.308 "seek_data": true, 00:09:33.308 "copy": false, 00:09:33.308 "nvme_iov_md": false 00:09:33.308 }, 00:09:33.308 "driver_specific": { 00:09:33.308 "lvol": { 00:09:33.308 "lvol_store_uuid": "19b471fc-a32f-432b-89e0-a5c4c30a5a37", 00:09:33.308 "base_bdev": "aio_bdev", 00:09:33.308 "thin_provision": false, 00:09:33.308 "num_allocated_clusters": 38, 00:09:33.308 "snapshot": false, 00:09:33.308 "clone": false, 00:09:33.308 "esnap_clone": false 00:09:33.308 } 00:09:33.308 } 00:09:33.308 } 00:09:33.308 ] 00:09:33.308 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:33.308 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19b471fc-a32f-432b-89e0-a5c4c30a5a37 00:09:33.308 01:50:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:33.567 01:50:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:33.567 01:50:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19b471fc-a32f-432b-89e0-a5c4c30a5a37 00:09:33.567 01:50:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:33.567 01:50:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 
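The two cluster checks just traced (free_clusters == 61, total_data_clusters == 99) are plain arithmetic on the lvstore geometry: assuming the 4 MiB cluster size the lvs_grow setup uses, the 150 MiB lvol pins 38 clusters, so a grown 99-cluster store reports 99 - 38 = 61 free. A minimal sketch of the same checks, run by hand with the rpc.py path and lvstore UUID from this run:

# Sketch only: reproduce the harness's cluster accounting checks by hand.
LVS_UUID=19b471fc-a32f-432b-89e0-a5c4c30a5a37   # lvstore UUID from this run
free=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters')
total=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].total_data_clusters')
# 150 MiB lvol / 4 MiB clusters = 38 allocated; 99 total - 38 = 61 free
(( free == 61 && total == 99 )) && echo "cluster accounting matches"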
00:09:33.567 01:50:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bba35e6c-564a-44fa-a19c-cc15ea89c563 00:09:33.825 01:50:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 19b471fc-a32f-432b-89e0-a5c4c30a5a37 00:09:34.083 01:50:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:34.342 01:50:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:34.342 00:09:34.342 real 0m17.096s 00:09:34.342 user 0m16.972s 00:09:34.342 sys 0m1.335s 00:09:34.342 01:50:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.342 01:50:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:34.342 ************************************ 00:09:34.342 END TEST lvs_grow_clean 00:09:34.342 ************************************ 00:09:34.342 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:34.342 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:34.342 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.342 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:34.342 ************************************ 00:09:34.342 START TEST lvs_grow_dirty 00:09:34.342 ************************************ 00:09:34.342 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:34.342 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:34.342 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:34.342 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:34.342 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:34.342 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:34.342 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:34.342 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:34.342 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:34.342 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:34.601 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:34.601 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:34.859 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=67e98dc1-53e2-40bd-9ccf-7b30d21e8f21 00:09:34.859 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67e98dc1-53e2-40bd-9ccf-7b30d21e8f21 00:09:34.859 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:34.860 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:34.860 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:34.860 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 67e98dc1-53e2-40bd-9ccf-7b30d21e8f21 lvol 150 00:09:35.118 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0db5e182-d756-45ac-a1c2-6fee4b653272 00:09:35.118 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:35.118 01:50:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:35.376 [2024-10-09 01:50:55.037743] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:35.376 [2024-10-09 01:50:55.037836] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:35.376 true 00:09:35.376 01:50:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67e98dc1-53e2-40bd-9ccf-7b30d21e8f21 00:09:35.376 01:50:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:35.634 01:50:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:35.634 01:50:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:35.634 01:50:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0db5e182-d756-45ac-a1c2-6fee4b653272 
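At this point the backing AIO file has been grown from 200 MiB to 400 MiB and bdev_aio_rescan has picked up the new size (51200 -> 102400 blocks), yet the lvstore still reports 49 data clusters: growing the blobstore is a separate, explicit step, which this test deliberately defers until I/O is in flight (the bdev_lvol_grow_lvstore call at nvmf_lvs_grow.sh@60 below). A condensed sketch of the flow, using the paths and UUID from this run:

# Sketch only: the grow sequence this test exercises, step by step.
truncate -s 400M test/nvmf/target/aio_bdev      # enlarge the backing file
scripts/rpc.py bdev_aio_rescan aio_bdev         # bdev block count: 51200 -> 102400
# total_data_clusters stays at 49 until the store itself is grown:
scripts/rpc.py bdev_lvol_grow_lvstore -u 67e98dc1-53e2-40bd-9ccf-7b30d21e8f21
scripts/rpc.py bdev_lvol_get_lvstores -u 67e98dc1-53e2-40bd-9ccf-7b30d21e8f21 \
    | jq -r '.[0].total_data_clusters'          # expected: 99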
00:09:35.893 01:50:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:36.151 [2024-10-09 01:50:55.788036] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:36.151 01:50:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:36.409 01:50:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:36.409 01:50:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3139580 00:09:36.409 01:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:36.409 01:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3139580 /var/tmp/bdevperf.sock 00:09:36.409 01:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3139580 ']' 00:09:36.409 01:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:36.409 01:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:36.409 01:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:36.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:36.409 01:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:36.410 01:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:36.410 [2024-10-09 01:50:56.066633] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
00:09:36.410 [2024-10-09 01:50:56.066733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3139580 ] 00:09:36.410 [2024-10-09 01:50:56.190731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.668 [2024-10-09 01:50:56.390534] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.234 01:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:37.234 01:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:37.234 01:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:37.492 Nvme0n1 00:09:37.493 01:50:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:37.751 [ 00:09:37.751 { 00:09:37.751 "name": "Nvme0n1", 00:09:37.751 "aliases": [ 00:09:37.751 "0db5e182-d756-45ac-a1c2-6fee4b653272" 00:09:37.751 ], 00:09:37.751 "product_name": "NVMe disk", 00:09:37.751 "block_size": 4096, 00:09:37.751 "num_blocks": 38912, 00:09:37.751 "uuid": "0db5e182-d756-45ac-a1c2-6fee4b653272", 00:09:37.751 "numa_id": 0, 00:09:37.751 "assigned_rate_limits": { 00:09:37.751 "rw_ios_per_sec": 0, 00:09:37.751 "rw_mbytes_per_sec": 0, 00:09:37.751 "r_mbytes_per_sec": 0, 00:09:37.751 "w_mbytes_per_sec": 0 00:09:37.751 }, 00:09:37.751 "claimed": false, 00:09:37.751 "zoned": false, 00:09:37.751 "supported_io_types": { 00:09:37.751 "read": true, 00:09:37.751 "write": true, 00:09:37.751 "unmap": true, 00:09:37.751 "flush": true, 00:09:37.751 "reset": true, 00:09:37.751 "nvme_admin": true, 00:09:37.751 "nvme_io": true, 00:09:37.751 "nvme_io_md": false, 00:09:37.751 "write_zeroes": true, 00:09:37.751 "zcopy": false, 00:09:37.751 "get_zone_info": false, 00:09:37.751 "zone_management": false, 00:09:37.751 "zone_append": false, 00:09:37.751 "compare": true, 00:09:37.751 "compare_and_write": true, 00:09:37.751 "abort": true, 00:09:37.751 "seek_hole": false, 00:09:37.751 "seek_data": false, 00:09:37.751 "copy": true, 00:09:37.751 "nvme_iov_md": false 00:09:37.751 }, 00:09:37.751 "memory_domains": [ 00:09:37.751 { 00:09:37.751 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:09:37.751 "dma_device_type": 0 00:09:37.751 } 00:09:37.751 ], 00:09:37.751 "driver_specific": { 00:09:37.751 "nvme": [ 00:09:37.751 { 00:09:37.751 "trid": { 00:09:37.751 "trtype": "RDMA", 00:09:37.751 "adrfam": "IPv4", 00:09:37.751 "traddr": "192.168.100.8", 00:09:37.751 "trsvcid": "4420", 00:09:37.751 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:37.751 }, 00:09:37.751 "ctrlr_data": { 00:09:37.751 "cntlid": 1, 00:09:37.751 "vendor_id": "0x8086", 00:09:37.751 "model_number": "SPDK bdev Controller", 00:09:37.751 "serial_number": "SPDK0", 00:09:37.751 "firmware_revision": "25.01", 00:09:37.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:37.751 "oacs": { 00:09:37.751 "security": 0, 00:09:37.751 "format": 0, 00:09:37.751 "firmware": 0, 00:09:37.751 "ns_manage": 0 00:09:37.751 }, 00:09:37.751 "multi_ctrlr": 
true, 00:09:37.751 "ana_reporting": false 00:09:37.751 }, 00:09:37.751 "vs": { 00:09:37.751 "nvme_version": "1.3" 00:09:37.751 }, 00:09:37.751 "ns_data": { 00:09:37.751 "id": 1, 00:09:37.751 "can_share": true 00:09:37.751 } 00:09:37.751 } 00:09:37.751 ], 00:09:37.751 "mp_policy": "active_passive" 00:09:37.751 } 00:09:37.751 } 00:09:37.751 ] 00:09:37.751 01:50:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3139760 00:09:37.751 01:50:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:37.751 01:50:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:37.751 Running I/O for 10 seconds... 00:09:38.685 Latency(us) 00:09:38.685 [2024-10-08T23:50:58.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.685 Nvme0n1 : 1.00 29762.00 116.26 0.00 0.00 0.00 0.00 0.00 00:09:38.685 [2024-10-08T23:50:58.505Z] =================================================================================================================== 00:09:38.685 [2024-10-08T23:50:58.505Z] Total : 29762.00 116.26 0.00 0.00 0.00 0.00 0.00 00:09:38.685 00:09:39.642 01:50:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 67e98dc1-53e2-40bd-9ccf-7b30d21e8f21 00:09:39.642 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.642 Nvme0n1 : 2.00 30033.50 117.32 0.00 0.00 0.00 0.00 0.00 00:09:39.642 [2024-10-08T23:50:59.462Z] =================================================================================================================== 00:09:39.642 [2024-10-08T23:50:59.462Z] Total : 30033.50 117.32 0.00 0.00 0.00 0.00 0.00 00:09:39.642 00:09:39.900 true 00:09:39.900 01:50:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67e98dc1-53e2-40bd-9ccf-7b30d21e8f21 00:09:39.900 01:50:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:40.158 01:50:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:40.158 01:50:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:40.158 01:50:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3139760 00:09:40.724 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.724 Nvme0n1 : 3.00 30080.33 117.50 0.00 0.00 0.00 0.00 0.00 00:09:40.724 [2024-10-08T23:51:00.544Z] =================================================================================================================== 00:09:40.724 [2024-10-08T23:51:00.544Z] Total : 30080.33 117.50 0.00 0.00 0.00 0.00 0.00 00:09:40.724 00:09:41.659 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.659 Nvme0n1 : 4.00 30142.75 117.75 0.00 0.00 0.00 0.00 0.00 00:09:41.659 [2024-10-08T23:51:01.479Z] 
=================================================================================================================== 00:09:41.659 [2024-10-08T23:51:01.479Z] Total : 30142.75 117.75 0.00 0.00 0.00 0.00 0.00 00:09:41.659 00:09:43.032 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.032 Nvme0n1 : 5.00 30248.00 118.16 0.00 0.00 0.00 0.00 0.00 00:09:43.032 [2024-10-08T23:51:02.852Z] =================================================================================================================== 00:09:43.032 [2024-10-08T23:51:02.852Z] Total : 30248.00 118.16 0.00 0.00 0.00 0.00 0.00 00:09:43.032 00:09:43.966 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.966 Nvme0n1 : 6.00 30331.83 118.48 0.00 0.00 0.00 0.00 0.00 00:09:43.966 [2024-10-08T23:51:03.786Z] =================================================================================================================== 00:09:43.966 [2024-10-08T23:51:03.786Z] Total : 30331.83 118.48 0.00 0.00 0.00 0.00 0.00 00:09:43.966 00:09:44.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.900 Nvme0n1 : 7.00 30348.71 118.55 0.00 0.00 0.00 0.00 0.00 00:09:44.900 [2024-10-08T23:51:04.720Z] =================================================================================================================== 00:09:44.900 [2024-10-08T23:51:04.720Z] Total : 30348.71 118.55 0.00 0.00 0.00 0.00 0.00 00:09:44.900 00:09:45.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.834 Nvme0n1 : 8.00 30319.25 118.43 0.00 0.00 0.00 0.00 0.00 00:09:45.834 [2024-10-08T23:51:05.654Z] =================================================================================================================== 00:09:45.834 [2024-10-08T23:51:05.654Z] Total : 30319.25 118.43 0.00 0.00 0.00 0.00 0.00 00:09:45.834 00:09:46.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.769 Nvme0n1 : 9.00 30372.33 118.64 0.00 0.00 0.00 0.00 0.00 00:09:46.769 [2024-10-08T23:51:06.589Z] =================================================================================================================== 00:09:46.769 [2024-10-08T23:51:06.589Z] Total : 30372.33 118.64 0.00 0.00 0.00 0.00 0.00 00:09:46.769 00:09:47.703 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.703 Nvme0n1 : 10.00 30415.30 118.81 0.00 0.00 0.00 0.00 0.00 00:09:47.703 [2024-10-08T23:51:07.523Z] =================================================================================================================== 00:09:47.703 [2024-10-08T23:51:07.523Z] Total : 30415.30 118.81 0.00 0.00 0.00 0.00 0.00 00:09:47.703 00:09:47.703 00:09:47.703 Latency(us) 00:09:47.703 [2024-10-08T23:51:07.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.703 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.703 Nvme0n1 : 10.00 30416.20 118.81 0.00 0.00 4204.91 3276.80 17324.30 00:09:47.703 [2024-10-08T23:51:07.523Z] =================================================================================================================== 00:09:47.703 [2024-10-08T23:51:07.523Z] Total : 30416.20 118.81 0.00 0.00 4204.91 3276.80 17324.30 00:09:47.703 { 00:09:47.703 "results": [ 00:09:47.703 { 00:09:47.703 "job": "Nvme0n1", 00:09:47.703 "core_mask": "0x2", 00:09:47.703 "workload": "randwrite", 00:09:47.703 "status": "finished", 00:09:47.703 "queue_depth": 128, 00:09:47.703 "io_size": 4096, 
00:09:47.703 "runtime": 10.003911, 00:09:47.703 "iops": 30416.20422252857, 00:09:47.703 "mibps": 118.81329774425222, 00:09:47.703 "io_failed": 0, 00:09:47.703 "io_timeout": 0, 00:09:47.703 "avg_latency_us": 4204.911018113548, 00:09:47.703 "min_latency_us": 3276.8, 00:09:47.703 "max_latency_us": 17324.29913043478 00:09:47.703 } 00:09:47.703 ], 00:09:47.703 "core_count": 1 00:09:47.703 } 00:09:47.703 01:51:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3139580 00:09:47.703 01:51:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3139580 ']' 00:09:47.703 01:51:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3139580 00:09:47.703 01:51:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:47.703 01:51:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:47.703 01:51:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3139580 00:09:47.961 01:51:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:47.961 01:51:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:47.962 01:51:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3139580' 00:09:47.962 killing process with pid 3139580 00:09:47.962 01:51:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3139580 00:09:47.962 Received shutdown signal, test time was about 10.000000 seconds 00:09:47.962 00:09:47.962 Latency(us) 00:09:47.962 [2024-10-08T23:51:07.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.962 [2024-10-08T23:51:07.782Z] =================================================================================================================== 00:09:47.962 [2024-10-08T23:51:07.782Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:47.962 01:51:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3139580 00:09:48.896 01:51:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:49.154 01:51:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:49.413 01:51:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67e98dc1-53e2-40bd-9ccf-7b30d21e8f21 00:09:49.413 01:51:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:49.413 01:51:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:49.413 01:51:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:49.413 01:51:09 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3136747 00:09:49.413 01:51:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3136747 00:09:49.672 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3136747 Killed "${NVMF_APP[@]}" "$@" 00:09:49.672 01:51:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:49.672 01:51:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:49.672 01:51:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:49.672 01:51:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:49.672 01:51:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:49.672 01:51:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:49.672 01:51:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=3141789 00:09:49.672 01:51:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 3141789 00:09:49.672 01:51:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3141789 ']' 00:09:49.672 01:51:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.672 01:51:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.672 01:51:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.672 01:51:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.672 01:51:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:49.672 [2024-10-09 01:51:09.343156] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:09:49.672 [2024-10-09 01:51:09.343255] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.672 [2024-10-09 01:51:09.473552] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.931 [2024-10-09 01:51:09.660475] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.931 [2024-10-09 01:51:09.660526] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.931 [2024-10-09 01:51:09.660543] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.931 [2024-10-09 01:51:09.660556] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:09:49.931 [2024-10-09 01:51:09.660566] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.931 [2024-10-09 01:51:09.661742] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.499 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:50.499 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:50.499 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:50.499 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:50.499 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:50.499 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.499 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:50.757 [2024-10-09 01:51:10.374172] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:50.757 [2024-10-09 01:51:10.374345] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:50.757 [2024-10-09 01:51:10.374386] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:50.757 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:50.757 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0db5e182-d756-45ac-a1c2-6fee4b653272 00:09:50.757 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=0db5e182-d756-45ac-a1c2-6fee4b653272 00:09:50.757 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:50.757 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:50.757 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:50.757 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:50.757 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:51.016 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0db5e182-d756-45ac-a1c2-6fee4b653272 -t 2000 00:09:51.016 [ 00:09:51.016 { 00:09:51.016 "name": "0db5e182-d756-45ac-a1c2-6fee4b653272", 00:09:51.016 "aliases": [ 00:09:51.016 "lvs/lvol" 00:09:51.016 ], 00:09:51.016 "product_name": "Logical Volume", 00:09:51.016 "block_size": 4096, 00:09:51.016 "num_blocks": 38912, 00:09:51.016 "uuid": "0db5e182-d756-45ac-a1c2-6fee4b653272", 00:09:51.016 "assigned_rate_limits": { 00:09:51.016 "rw_ios_per_sec": 0, 00:09:51.016 
"rw_mbytes_per_sec": 0, 00:09:51.016 "r_mbytes_per_sec": 0, 00:09:51.016 "w_mbytes_per_sec": 0 00:09:51.016 }, 00:09:51.016 "claimed": false, 00:09:51.016 "zoned": false, 00:09:51.016 "supported_io_types": { 00:09:51.016 "read": true, 00:09:51.016 "write": true, 00:09:51.016 "unmap": true, 00:09:51.016 "flush": false, 00:09:51.016 "reset": true, 00:09:51.016 "nvme_admin": false, 00:09:51.016 "nvme_io": false, 00:09:51.016 "nvme_io_md": false, 00:09:51.016 "write_zeroes": true, 00:09:51.016 "zcopy": false, 00:09:51.016 "get_zone_info": false, 00:09:51.016 "zone_management": false, 00:09:51.016 "zone_append": false, 00:09:51.016 "compare": false, 00:09:51.016 "compare_and_write": false, 00:09:51.016 "abort": false, 00:09:51.016 "seek_hole": true, 00:09:51.016 "seek_data": true, 00:09:51.016 "copy": false, 00:09:51.016 "nvme_iov_md": false 00:09:51.016 }, 00:09:51.016 "driver_specific": { 00:09:51.016 "lvol": { 00:09:51.016 "lvol_store_uuid": "67e98dc1-53e2-40bd-9ccf-7b30d21e8f21", 00:09:51.016 "base_bdev": "aio_bdev", 00:09:51.016 "thin_provision": false, 00:09:51.016 "num_allocated_clusters": 38, 00:09:51.016 "snapshot": false, 00:09:51.016 "clone": false, 00:09:51.016 "esnap_clone": false 00:09:51.016 } 00:09:51.016 } 00:09:51.016 } 00:09:51.016 ] 00:09:51.016 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:51.016 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67e98dc1-53e2-40bd-9ccf-7b30d21e8f21 00:09:51.016 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:51.274 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:51.274 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67e98dc1-53e2-40bd-9ccf-7b30d21e8f21 00:09:51.274 01:51:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:51.533 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:51.533 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:51.533 [2024-10-09 01:51:11.338392] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:51.792 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67e98dc1-53e2-40bd-9ccf-7b30d21e8f21 00:09:51.792 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:51.792 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67e98dc1-53e2-40bd-9ccf-7b30d21e8f21 00:09:51.792 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:09:51.792 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:51.792 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:09:51.792 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:51.792 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:09:51.792 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:51.792 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:09:51.792 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py ]] 00:09:51.792 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67e98dc1-53e2-40bd-9ccf-7b30d21e8f21 00:09:51.792 request: 00:09:51.792 { 00:09:51.792 "uuid": "67e98dc1-53e2-40bd-9ccf-7b30d21e8f21", 00:09:51.792 "method": "bdev_lvol_get_lvstores", 00:09:51.792 "req_id": 1 00:09:51.792 } 00:09:51.792 Got JSON-RPC error response 00:09:51.792 response: 00:09:51.792 { 00:09:51.792 "code": -19, 00:09:51.792 "message": "No such device" 00:09:51.792 } 00:09:51.792 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:51.792 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:51.792 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:51.793 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:51.793 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:52.051 aio_bdev 00:09:52.051 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0db5e182-d756-45ac-a1c2-6fee4b653272 00:09:52.051 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=0db5e182-d756-45ac-a1c2-6fee4b653272 00:09:52.051 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:52.051 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:52.051 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:52.051 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:52.051 01:51:11 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:52.310 01:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0db5e182-d756-45ac-a1c2-6fee4b653272 -t 2000 00:09:52.310 [ 00:09:52.310 { 00:09:52.310 "name": "0db5e182-d756-45ac-a1c2-6fee4b653272", 00:09:52.310 "aliases": [ 00:09:52.310 "lvs/lvol" 00:09:52.310 ], 00:09:52.310 "product_name": "Logical Volume", 00:09:52.310 "block_size": 4096, 00:09:52.310 "num_blocks": 38912, 00:09:52.310 "uuid": "0db5e182-d756-45ac-a1c2-6fee4b653272", 00:09:52.310 "assigned_rate_limits": { 00:09:52.310 "rw_ios_per_sec": 0, 00:09:52.310 "rw_mbytes_per_sec": 0, 00:09:52.310 "r_mbytes_per_sec": 0, 00:09:52.310 "w_mbytes_per_sec": 0 00:09:52.310 }, 00:09:52.310 "claimed": false, 00:09:52.310 "zoned": false, 00:09:52.310 "supported_io_types": { 00:09:52.310 "read": true, 00:09:52.310 "write": true, 00:09:52.310 "unmap": true, 00:09:52.310 "flush": false, 00:09:52.310 "reset": true, 00:09:52.310 "nvme_admin": false, 00:09:52.310 "nvme_io": false, 00:09:52.310 "nvme_io_md": false, 00:09:52.310 "write_zeroes": true, 00:09:52.310 "zcopy": false, 00:09:52.310 "get_zone_info": false, 00:09:52.310 "zone_management": false, 00:09:52.310 "zone_append": false, 00:09:52.310 "compare": false, 00:09:52.310 "compare_and_write": false, 00:09:52.310 "abort": false, 00:09:52.310 "seek_hole": true, 00:09:52.310 "seek_data": true, 00:09:52.310 "copy": false, 00:09:52.310 "nvme_iov_md": false 00:09:52.310 }, 00:09:52.310 "driver_specific": { 00:09:52.310 "lvol": { 00:09:52.310 "lvol_store_uuid": "67e98dc1-53e2-40bd-9ccf-7b30d21e8f21", 00:09:52.310 "base_bdev": "aio_bdev", 00:09:52.310 "thin_provision": false, 00:09:52.310 "num_allocated_clusters": 38, 00:09:52.310 "snapshot": false, 00:09:52.310 "clone": false, 00:09:52.310 "esnap_clone": false 00:09:52.310 } 00:09:52.310 } 00:09:52.310 } 00:09:52.310 ] 00:09:52.310 01:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:52.310 01:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67e98dc1-53e2-40bd-9ccf-7b30d21e8f21 00:09:52.310 01:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:52.568 01:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:52.568 01:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67e98dc1-53e2-40bd-9ccf-7b30d21e8f21 00:09:52.568 01:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:52.827 01:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:52.827 01:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0db5e182-d756-45ac-a1c2-6fee4b653272 00:09:53.085 01:51:12 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 67e98dc1-53e2-40bd-9ccf-7b30d21e8f21 00:09:53.085 01:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:53.343 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:53.343 00:09:53.343 real 0m19.042s 00:09:53.343 user 0m49.425s 00:09:53.343 sys 0m3.550s 00:09:53.343 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.343 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:53.343 ************************************ 00:09:53.343 END TEST lvs_grow_dirty 00:09:53.343 ************************************ 00:09:53.343 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:53.343 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:53.343 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:53.343 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:53.343 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:53.343 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:53.343 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:53.343 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:53.343 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:53.343 nvmf_trace.0 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:53.602 rmmod nvme_rdma 00:09:53.602 rmmod nvme_fabrics 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 
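The teardown traced here is the stock nvmftestfini path: sync, unload the RDMA and fabrics host modules, then kill the nvmf target. A rough manual equivalent, assuming $nvmfpid holds the target PID as it does in this run (the harness itself can simply wait, since the target is its child process):

# Sketch only: manual equivalent of the nvmftestfini teardown above.
sync
modprobe -v -r nvme-rdma        # emits "rmmod nvme_rdma" on success
modprobe -v -r nvme-fabrics     # emits "rmmod nvme_fabrics"
kill "$nvmfpid"
while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done   # poll until the target exits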
00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 3141789 ']' 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 3141789 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3141789 ']' 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3141789 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3141789 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3141789' 00:09:53.602 killing process with pid 3141789 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3141789 00:09:53.602 01:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3141789 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:09:54.977 00:09:54.977 real 0m45.248s 00:09:54.977 user 1m13.740s 00:09:54.977 sys 0m10.311s 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:54.977 ************************************ 00:09:54.977 END TEST nvmf_lvs_grow 00:09:54.977 ************************************ 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:54.977 ************************************ 00:09:54.977 START TEST nvmf_bdev_io_wait 00:09:54.977 ************************************ 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:54.977 * Looking for test storage... 
00:09:54.977 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.977 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:54.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.978 --rc genhtml_branch_coverage=1 00:09:54.978 --rc genhtml_function_coverage=1 00:09:54.978 --rc genhtml_legend=1 00:09:54.978 --rc geninfo_all_blocks=1 00:09:54.978 --rc geninfo_unexecuted_blocks=1 00:09:54.978 00:09:54.978 ' 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:54.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.978 --rc genhtml_branch_coverage=1 00:09:54.978 --rc genhtml_function_coverage=1 00:09:54.978 --rc genhtml_legend=1 00:09:54.978 --rc geninfo_all_blocks=1 00:09:54.978 --rc geninfo_unexecuted_blocks=1 00:09:54.978 00:09:54.978 ' 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:54.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.978 --rc genhtml_branch_coverage=1 00:09:54.978 --rc genhtml_function_coverage=1 00:09:54.978 --rc genhtml_legend=1 00:09:54.978 --rc geninfo_all_blocks=1 00:09:54.978 --rc geninfo_unexecuted_blocks=1 00:09:54.978 00:09:54.978 ' 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:54.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.978 --rc genhtml_branch_coverage=1 00:09:54.978 --rc genhtml_function_coverage=1 00:09:54.978 --rc genhtml_legend=1 00:09:54.978 --rc geninfo_all_blocks=1 00:09:54.978 --rc geninfo_unexecuted_blocks=1 00:09:54.978 00:09:54.978 ' 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:09:54.978 01:51:14 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.978 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.237 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.237 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.238 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.238 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:55.238 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:55.238 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:55.238 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:09:55.238 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.238 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:55.238 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:55.238 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:55.238 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.238 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.238 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.238 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:55.238 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:55.238 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.238 01:51:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.917 01:51:21 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:01.917 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:10:01.918 Found 0000:18:00.0 (0x8086 - 0x159b) 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 
0x159b)' 00:10:01.918 Found 0000:18:00.1 (0x8086 - 0x159b) 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@403 -- # modinfo irdma 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:10:01.918 Found net devices under 0000:18:00.0: cvl_0_0 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:10:01.918 Found net devices under 0000:18:00.1: cvl_0_1 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # rdma_device_init 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@528 -- # allocate_nic_ips 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo cvl_0_0 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:01.918 
01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo cvl_0_1 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:10:01.918 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:10:01.918 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:10:01.918 altname enp24s0f0np0 00:10:01.918 altname ens785f0np0 00:10:01.918 inet 192.168.100.8/24 scope global cvl_0_0 00:10:01.918 valid_lft forever preferred_lft forever 00:10:01.918 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:10:01.918 valid_lft forever preferred_lft forever 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:10:01.918 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:10:01.918 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:10:01.918 altname enp24s0f1np1 00:10:01.918 altname ens785f1np1 00:10:01.918 inet 192.168.100.9/24 scope global cvl_0_1 00:10:01.918 valid_lft forever preferred_lft forever 00:10:01.918 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:10:01.918 valid_lft forever preferred_lft forever 
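The address probing traced above boils down to a single helper; a condensed sketch that mirrors the get_ip_address pipeline in the trace:

    # Print the first IPv4 address of an interface, without the /prefix.
    get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
    get_ip_address cvl_0_0   # -> 192.168.100.8
    get_ip_address cvl_0_1   # -> 192.168.100.9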
00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.918 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo cvl_0_0 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo cvl_0_1 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:10:01.919 192.168.100.9' 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:10:01.919 192.168.100.9' 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # head -n 1 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:10:01.919 192.168.100.9' 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # tail -n +2 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # head -n 1 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=3145575 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 3145575 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3145575 ']' 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 
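The two addresses collected per RDMA interface are then split into first and second target IPs with a plain head/tail pipeline, exactly as the trace shows; condensed:

    # RDMA_IP_LIST holds one IPv4 address per RDMA-capable interface,
    # one per line; the first becomes the listener address.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)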
00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:01.919 01:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:02.177 [2024-10-09 01:51:21.777702] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:10:02.177 [2024-10-09 01:51:21.777823] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.177 [2024-10-09 01:51:21.908666] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:02.436 [2024-10-09 01:51:22.106209] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.436 [2024-10-09 01:51:22.106271] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.436 [2024-10-09 01:51:22.106284] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.436 [2024-10-09 01:51:22.106298] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.436 [2024-10-09 01:51:22.106307] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.436 [2024-10-09 01:51:22.108631] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.436 [2024-10-09 01:51:22.108698] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.436 [2024-10-09 01:51:22.108757] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.436 [2024-10-09 01:51:22.108764] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.004 01:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:03.004 01:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:10:03.004 01:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:03.004 01:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:03.004 01:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.004 01:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.004 01:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:03.004 01:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.004 01:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.004 01:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.004 01:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:03.004 01:51:22 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.004 01:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.264 01:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.264 01:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:03.264 01:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.264 01:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.264 [2024-10-09 01:51:22.908838] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x612000029440/0x617000007c40) succeed. 00:10:03.264 [2024-10-09 01:51:22.918183] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x6120000295c0/0x617000007fc0) succeed. 00:10:03.264 [2024-10-09 01:51:22.918220] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:10:03.264 01:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.264 01:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:03.264 01:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.264 01:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.264 Malloc0 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.264 [2024-10-09 01:51:23.048888] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3145770 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3145772 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:03.264 { 00:10:03.264 "params": { 00:10:03.264 "name": "Nvme$subsystem", 00:10:03.264 "trtype": "$TEST_TRANSPORT", 00:10:03.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.264 "adrfam": "ipv4", 00:10:03.264 "trsvcid": "$NVMF_PORT", 00:10:03.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.264 "hdgst": ${hdgst:-false}, 00:10:03.264 "ddgst": ${ddgst:-false} 00:10:03.264 }, 00:10:03.264 "method": "bdev_nvme_attach_controller" 00:10:03.264 } 00:10:03.264 EOF 00:10:03.264 )") 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3145774 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:03.264 { 00:10:03.264 "params": { 00:10:03.264 "name": "Nvme$subsystem", 00:10:03.264 "trtype": "$TEST_TRANSPORT", 00:10:03.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.264 "adrfam": "ipv4", 00:10:03.264 "trsvcid": "$NVMF_PORT", 00:10:03.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.264 "hdgst": ${hdgst:-false}, 00:10:03.264 "ddgst": ${ddgst:-false} 00:10:03.264 }, 00:10:03.264 "method": "bdev_nvme_attach_controller" 00:10:03.264 } 00:10:03.264 EOF 00:10:03.264 )") 00:10:03.264 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:03.265 01:51:23 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3145777 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:03.265 { 00:10:03.265 "params": { 00:10:03.265 "name": "Nvme$subsystem", 00:10:03.265 "trtype": "$TEST_TRANSPORT", 00:10:03.265 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.265 "adrfam": "ipv4", 00:10:03.265 "trsvcid": "$NVMF_PORT", 00:10:03.265 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.265 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.265 "hdgst": ${hdgst:-false}, 00:10:03.265 "ddgst": ${ddgst:-false} 00:10:03.265 }, 00:10:03.265 "method": "bdev_nvme_attach_controller" 00:10:03.265 } 00:10:03.265 EOF 00:10:03.265 )") 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:03.265 { 00:10:03.265 "params": { 00:10:03.265 "name": "Nvme$subsystem", 00:10:03.265 "trtype": "$TEST_TRANSPORT", 00:10:03.265 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.265 "adrfam": "ipv4", 00:10:03.265 "trsvcid": "$NVMF_PORT", 00:10:03.265 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.265 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.265 "hdgst": ${hdgst:-false}, 00:10:03.265 "ddgst": ${ddgst:-false} 00:10:03.265 }, 00:10:03.265 "method": "bdev_nvme_attach_controller" 00:10:03.265 } 00:10:03.265 EOF 00:10:03.265 )") 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3145770 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
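At this point the target side is fully configured. The bring-up traced over the preceding blocks reduces to the following RPC sequence (a sketch using SPDK's stock rpc.py client in place of the harness's rpc_cmd wrapper; all names and arguments are taken from the trace):

    scripts/rpc.py bdev_set_options -p 5 -c 1
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420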
00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:03.265 "params": { 00:10:03.265 "name": "Nvme1", 00:10:03.265 "trtype": "rdma", 00:10:03.265 "traddr": "192.168.100.8", 00:10:03.265 "adrfam": "ipv4", 00:10:03.265 "trsvcid": "4420", 00:10:03.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.265 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.265 "hdgst": false, 00:10:03.265 "ddgst": false 00:10:03.265 }, 00:10:03.265 "method": "bdev_nvme_attach_controller" 00:10:03.265 }' 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:03.265 "params": { 00:10:03.265 "name": "Nvme1", 00:10:03.265 "trtype": "rdma", 00:10:03.265 "traddr": "192.168.100.8", 00:10:03.265 "adrfam": "ipv4", 00:10:03.265 "trsvcid": "4420", 00:10:03.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.265 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.265 "hdgst": false, 00:10:03.265 "ddgst": false 00:10:03.265 }, 00:10:03.265 "method": "bdev_nvme_attach_controller" 00:10:03.265 }' 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:03.265 "params": { 00:10:03.265 "name": "Nvme1", 00:10:03.265 "trtype": "rdma", 00:10:03.265 "traddr": "192.168.100.8", 00:10:03.265 "adrfam": "ipv4", 00:10:03.265 "trsvcid": "4420", 00:10:03.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.265 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.265 "hdgst": false, 00:10:03.265 "ddgst": false 00:10:03.265 }, 00:10:03.265 "method": "bdev_nvme_attach_controller" 00:10:03.265 }' 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:10:03.265 01:51:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:03.265 "params": { 00:10:03.265 "name": "Nvme1", 00:10:03.265 "trtype": "rdma", 00:10:03.265 "traddr": "192.168.100.8", 00:10:03.265 "adrfam": "ipv4", 00:10:03.265 "trsvcid": "4420", 00:10:03.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.265 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.265 "hdgst": false, 00:10:03.265 "ddgst": false 00:10:03.265 }, 00:10:03.265 "method": "bdev_nvme_attach_controller" 00:10:03.265 }' 00:10:03.525 [2024-10-09 01:51:23.140438] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:10:03.525 [2024-10-09 01:51:23.140434] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:10:03.525 [2024-10-09 01:51:23.140437] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
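Each of the four initiators is launched the same way, differing only in core mask, shm id, and workload; a condensed sketch of the write instance (the /dev/fd/63 seen in the trace is bash process substitution feeding in the JSON printed above):

    # Launch bdevperf on core 0x10 with shm id 1, queue depth 128,
    # 4 KiB I/Os, write workload, 1 s runtime, 256 MiB of hugepages.
    build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!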
00:10:03.525 [2024-10-09 01:51:23.140544] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:03.525 [2024-10-09 01:51:23.140545] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:03.525 [2024-10-09 01:51:23.140561] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:03.525 [2024-10-09 01:51:23.145997] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:10:03.525 [2024-10-09 01:51:23.146093] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:03.784 [2024-10-09 01:51:23.384409] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.784 [2024-10-09 01:51:23.485270] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.784 [2024-10-09 01:51:23.570942] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:10:04.044 [2024-10-09 01:51:23.605693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.044 [2024-10-09 01:51:23.670453] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.044 [2024-10-09 01:51:23.675765] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:10:04.044 [2024-10-09 01:51:23.802122] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:10:04.044 [2024-10-09 01:51:23.848833] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:10:04.303 Running I/O for 1 seconds... 00:10:04.303 Running I/O for 1 seconds... 00:10:04.562 Running I/O for 1 seconds... 00:10:04.562 Running I/O for 1 seconds...
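The per-workload result tables follow. As a sanity check on the MiB/s column, throughput is simply IOPS times the fixed 4096-byte I/O size: for the read job below, 17815.53 IOPS x 4096 B = 72,972,411 B/s, or about 69.59 MiB/s, which matches the reported figure.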
00:10:05.501 17794.00 IOPS, 69.51 MiB/s 00:10:05.501 Latency(us) 00:10:05.501 [2024-10-08T23:51:25.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.501 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:05.501 Nvme1n1 : 1.01 17815.53 69.59 0.00 0.00 7160.58 5100.41 23934.89 00:10:05.501 [2024-10-08T23:51:25.321Z] =================================================================================================================== 00:10:05.501 [2024-10-08T23:51:25.321Z] Total : 17815.53 69.59 0.00 0.00 7160.58 5100.41 23934.89 00:10:05.501 225400.00 IOPS, 880.47 MiB/s 00:10:05.501 Latency(us) 00:10:05.501 [2024-10-08T23:51:25.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.501 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:05.501 Nvme1n1 : 1.00 225026.80 879.01 0.00 0.00 566.00 260.01 2607.19 00:10:05.501 [2024-10-08T23:51:25.321Z] =================================================================================================================== 00:10:05.501 [2024-10-08T23:51:25.321Z] Total : 225026.80 879.01 0.00 0.00 566.00 260.01 2607.19 00:10:05.761 14688.00 IOPS, 57.38 MiB/s 00:10:05.761 Latency(us) 00:10:05.761 [2024-10-08T23:51:25.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.761 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:05.761 Nvme1n1 : 1.01 14735.06 57.56 0.00 0.00 8657.05 4986.43 24618.74 00:10:05.761 [2024-10-08T23:51:25.581Z] =================================================================================================================== 00:10:05.761 [2024-10-08T23:51:25.581Z] Total : 14735.06 57.56 0.00 0.00 8657.05 4986.43 24618.74 00:10:05.761 16641.00 IOPS, 65.00 MiB/s 00:10:05.761 Latency(us) 00:10:05.761 [2024-10-08T23:51:25.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.761 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:05.761 Nvme1n1 : 1.01 16719.12 65.31 0.00 0.00 7634.84 3419.27 24390.79 00:10:05.761 [2024-10-08T23:51:25.581Z] =================================================================================================================== 00:10:05.761 [2024-10-08T23:51:25.581Z] Total : 16719.12 65.31 0.00 0.00 7634.84 3419.27 24390.79 00:10:07.139 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3145772 00:10:07.139 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3145774 00:10:07.139 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3145777 00:10:07.139 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:07.139 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.139 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:07.139 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.139 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@514 -- # nvmfcleanup 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:07.140 rmmod nvme_rdma 00:10:07.140 rmmod nvme_fabrics 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 3145575 ']' 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 3145575 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3145575 ']' 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3145575 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3145575 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3145575' 00:10:07.140 killing process with pid 3145575 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3145575 00:10:07.140 01:51:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3145575 00:10:08.520 01:51:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:08.520 01:51:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:10:08.520 00:10:08.520 real 0m13.335s 00:10:08.520 user 0m34.756s 00:10:08.520 sys 0m7.503s 00:10:08.520 01:51:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:08.520 01:51:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.520 ************************************ 00:10:08.520 END TEST nvmf_bdev_io_wait 00:10:08.520 ************************************ 00:10:08.520 01:51:27 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:10:08.520 01:51:27 
nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:08.520 01:51:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:08.520 01:51:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:08.520 ************************************ 00:10:08.520 START TEST nvmf_queue_depth 00:10:08.520 ************************************ 00:10:08.520 01:51:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:10:08.520 * Looking for test storage... 00:10:08.520 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:08.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.520 --rc genhtml_branch_coverage=1 00:10:08.520 --rc genhtml_function_coverage=1 00:10:08.520 --rc genhtml_legend=1 00:10:08.520 --rc geninfo_all_blocks=1 00:10:08.520 --rc geninfo_unexecuted_blocks=1 00:10:08.520 00:10:08.520 ' 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:08.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.520 --rc genhtml_branch_coverage=1 00:10:08.520 --rc genhtml_function_coverage=1 00:10:08.520 --rc genhtml_legend=1 00:10:08.520 --rc geninfo_all_blocks=1 00:10:08.520 --rc geninfo_unexecuted_blocks=1 00:10:08.520 00:10:08.520 ' 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:08.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.520 --rc genhtml_branch_coverage=1 00:10:08.520 --rc genhtml_function_coverage=1 00:10:08.520 --rc genhtml_legend=1 00:10:08.520 --rc geninfo_all_blocks=1 00:10:08.520 --rc geninfo_unexecuted_blocks=1 00:10:08.520 00:10:08.520 ' 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:08.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.520 --rc genhtml_branch_coverage=1 00:10:08.520 --rc genhtml_function_coverage=1 00:10:08.520 --rc genhtml_legend=1 00:10:08.520 --rc geninfo_all_blocks=1 00:10:08.520 --rc geninfo_unexecuted_blocks=1 00:10:08.520 00:10:08.520 ' 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.520 01:51:28 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:10:08.520 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:08.521 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:08.521 01:51:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.649 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:10:16.650 Found 0000:18:00.0 (0x8086 - 0x159b) 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 
-- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:10:16.650 Found 0000:18:00.1 (0x8086 - 0x159b) 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@403 -- # modinfo irdma 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:10:16.650 Found net devices under 0000:18:00.0: cvl_0_0 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:10:16.650 Found net devices under 0000:18:00.1: cvl_0_1 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.650 01:51:34 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # rdma_device_init 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:16.650 01:51:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@528 -- # allocate_nic_ips 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo cvl_0_0 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 
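[Annotation] Before the trace continues with the loop's second pass (cvl_0_1) and the address reads below, note that the per-interface IP lookup being exercised here reduces to a three-stage pipeline over ip(8). A minimal standalone version, using the cvl_0_* names and addresses seen in this run:

    # Sketch of get_ip_address as traced in nvmf/common.sh: "ip -o -4" prints
    # one line per address, field 4 is "ADDR/PREFIX", and cut drops the prefix.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address cvl_0_0    # prints 192.168.100.8 on this host
    get_ip_address cvl_0_1    # prints 192.168.100.9 on this host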
00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo cvl_0_1 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:10:16.650 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:10:16.650 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:10:16.650 altname enp24s0f0np0 00:10:16.650 altname ens785f0np0 00:10:16.650 inet 192.168.100.8/24 scope global cvl_0_0 00:10:16.650 valid_lft forever preferred_lft forever 00:10:16.650 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:10:16.650 valid_lft forever preferred_lft forever 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:10:16.650 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:10:16.650 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:10:16.650 altname enp24s0f1np1 00:10:16.650 altname ens785f1np1 00:10:16.650 inet 192.168.100.9/24 scope global cvl_0_1 00:10:16.650 valid_lft forever preferred_lft forever 00:10:16.650 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:10:16.650 valid_lft forever preferred_lft forever 00:10:16.650 
01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.650 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo cvl_0_0 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo cvl_0_1 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in 
$(get_rdma_if_list) 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:10:16.651 192.168.100.9' 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:10:16.651 192.168.100.9' 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # head -n 1 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:10:16.651 192.168.100.9' 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # head -n 1 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # tail -n +2 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=3149422 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 3149422 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3149422 ']' 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.651 01:51:35 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.651 [2024-10-09 01:51:35.304135] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:10:16.651 [2024-10-09 01:51:35.304255] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.651 [2024-10-09 01:51:35.433709] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.651 [2024-10-09 01:51:35.620294] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.651 [2024-10-09 01:51:35.620356] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.651 [2024-10-09 01:51:35.620369] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.651 [2024-10-09 01:51:35.620382] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.651 [2024-10-09 01:51:35.620392] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:16.651 [2024-10-09 01:51:35.621737] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.651 [2024-10-09 01:51:36.181629] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x6120000289c0/0x617000007c40) succeed. 00:10:16.651 [2024-10-09 01:51:36.190665] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000028b40/0x617000007fc0) succeed. 00:10:16.651 [2024-10-09 01:51:36.190701] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.651 Malloc0 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.651 [2024-10-09 01:51:36.288863] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3149611 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3149611 /var/tmp/bdevperf.sock 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3149611 ']' 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:16.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
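[Annotation] The rpc_cmd calls traced above map one-to-one onto the stock scripts/rpc.py client. A standalone sketch of the same target-side setup, assuming nvmf_tgt is already running and listening on the default /var/tmp/spdk.sock:

    RPC=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420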
00:10:16.651 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.652 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.652 01:51:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:16.652 [2024-10-09 01:51:36.378241] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:10:16.652 [2024-10-09 01:51:36.378359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3149611 ] 00:10:16.911 [2024-10-09 01:51:36.506757] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.911 [2024-10-09 01:51:36.703842] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.479 01:51:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:17.479 01:51:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:17.480 01:51:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:17.480 01:51:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.480 01:51:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:17.480 NVMe0n1 00:10:17.480 01:51:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.480 01:51:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:17.738 Running I/O for 10 seconds... 
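[Annotation] Because bdevperf was started with -z (wait for RPC configuration) on its own socket, the run above is driven entirely over RPC: the remote subsystem is attached as a bdev, then bdevperf.py kicks off the I/O. The two commands traced above, reduced to a standalone sketch:

    SPDK=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
    SOCK=/var/tmp/bdevperf.sock
    # Attach the NVMe-oF subsystem exported in the previous step as bdev NVMe0n1
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Start the queued I/O (the "Running I/O for 10 seconds..." phase above)
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests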
00:10:19.614 14336.00 IOPS, 56.00 MiB/s
[2024-10-08T23:51:40.814Z] 14848.00 IOPS, 58.00 MiB/s
[2024-10-08T23:51:41.751Z] 14974.33 IOPS, 58.49 MiB/s
[2024-10-08T23:51:42.689Z] 14984.75 IOPS, 58.53 MiB/s
[2024-10-08T23:51:43.627Z] 15014.20 IOPS, 58.65 MiB/s
[2024-10-08T23:51:44.565Z] 15018.67 IOPS, 58.67 MiB/s
[2024-10-08T23:51:45.619Z] 15067.43 IOPS, 58.86 MiB/s
[2024-10-08T23:51:46.556Z] 15088.12 IOPS, 58.94 MiB/s
[2024-10-08T23:51:47.497Z] 15092.56 IOPS, 58.96 MiB/s
[2024-10-08T23:51:47.497Z] 15095.30 IOPS, 58.97 MiB/s
00:10:27.677 Latency(us)
00:10:27.677 [2024-10-08T23:51:47.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:27.677 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:10:27.677 Verification LBA range: start 0x0 length 0x4000
00:10:27.677 NVMe0n1 : 10.04 15126.81 59.09 0.00 0.00 67475.34 6924.02 44222.55
00:10:27.677 [2024-10-08T23:51:47.497Z] ===================================================================================================================
00:10:27.677 [2024-10-08T23:51:47.497Z] Total : 15126.81 59.09 0.00 0.00 67475.34 6924.02 44222.55
00:10:27.677 {
00:10:27.677 "results": [
00:10:27.677 {
00:10:27.677 "job": "NVMe0n1",
00:10:27.677 "core_mask": "0x1",
00:10:27.677 "workload": "verify",
00:10:27.677 "status": "finished",
00:10:27.677 "verify_range": {
00:10:27.677 "start": 0,
00:10:27.677 "length": 16384
00:10:27.677 },
00:10:27.677 "queue_depth": 1024,
00:10:27.677 "io_size": 4096,
00:10:27.677 "runtime": 10.036353,
00:10:27.677 "iops": 15126.809509390512,
00:10:27.677 "mibps": 59.08909964605669,
00:10:27.677 "io_failed": 0,
00:10:27.677 "io_timeout": 0,
00:10:27.677 "avg_latency_us": 67475.34112641739,
00:10:27.677 "min_latency_us": 6924.020869565217,
00:10:27.677 "max_latency_us": 44222.55304347826
00:10:27.677 }
00:10:27.677 ],
00:10:27.677 "core_count": 1
00:10:27.677 }
00:10:27.677 01:51:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3149611
00:10:27.677 01:51:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3149611 ']'
00:10:27.677 01:51:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3149611
00:10:27.677 01:51:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname
00:10:27.677 01:51:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:27.677 01:51:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3149611
00:10:27.936 01:51:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:27.936 01:51:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:27.936 01:51:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3149611'
killing process with pid 3149611
00:10:27.936 01:51:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3149611
00:10:27.936 Received shutdown signal, test time was about 10.000000 seconds
00:10:27.936
00:10:27.936 Latency(us)
00:10:27.936 [2024-10-08T23:51:47.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:27.936 [2024-10-08T23:51:47.756Z]
=================================================================================================================== 00:10:27.936 [2024-10-08T23:51:47.756Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:27.936 01:51:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3149611 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:28.875 rmmod nvme_rdma 00:10:28.875 rmmod nvme_fabrics 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 3149422 ']' 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 3149422 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3149422 ']' 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3149422 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3149422 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3149422' 00:10:28.875 killing process with pid 3149422 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3149422 00:10:28.875 01:51:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3149422 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:10:30.802 00:10:30.802 real 0m22.105s 00:10:30.802 user 0m29.167s 00:10:30.802 sys 0m6.399s 00:10:30.802 
01:51:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:30.802 ************************************ 00:10:30.802 END TEST nvmf_queue_depth 00:10:30.802 ************************************ 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:30.802 ************************************ 00:10:30.802 START TEST nvmf_target_multipath 00:10:30.802 ************************************ 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:10:30.802 * Looking for test storage... 00:10:30.802 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:30.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.802 --rc genhtml_branch_coverage=1 00:10:30.802 --rc genhtml_function_coverage=1 00:10:30.802 --rc genhtml_legend=1 00:10:30.802 --rc geninfo_all_blocks=1 00:10:30.802 --rc geninfo_unexecuted_blocks=1 00:10:30.802 00:10:30.802 ' 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:30.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.802 --rc genhtml_branch_coverage=1 00:10:30.802 --rc genhtml_function_coverage=1 00:10:30.802 --rc genhtml_legend=1 00:10:30.802 --rc geninfo_all_blocks=1 00:10:30.802 --rc geninfo_unexecuted_blocks=1 00:10:30.802 00:10:30.802 ' 00:10:30.802 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:30.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.802 --rc genhtml_branch_coverage=1 00:10:30.802 --rc genhtml_function_coverage=1 00:10:30.803 --rc genhtml_legend=1 00:10:30.803 --rc geninfo_all_blocks=1 00:10:30.803 --rc geninfo_unexecuted_blocks=1 00:10:30.803 00:10:30.803 ' 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:30.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.803 --rc genhtml_branch_coverage=1 00:10:30.803 --rc genhtml_function_coverage=1 00:10:30.803 --rc genhtml_legend=1 00:10:30.803 --rc geninfo_all_blocks=1 00:10:30.803 --rc geninfo_unexecuted_blocks=1 00:10:30.803 00:10:30.803 ' 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.803 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:30.803 01:51:50 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@319 -- # net_devs=() 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.377 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 
-- # for pci in "${pci_devs[@]}" 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:10:37.378 Found 0000:18:00.0 (0x8086 - 0x159b) 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:10:37.378 Found 0000:18:00.1 (0x8086 - 0x159b) 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@403 -- # modinfo irdma 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.378 01:51:56 
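Both E810 ports (device 0x159b, driver ice) have now been matched against the harness's PCI ID tables, NVME_CONNECT has grown an extra "-i 15" option for RDMA, and irdma is loaded with RoCE enabled. A short sketch of that last step, assuming the sysfs path shown in the trace and that roce_ena is a plain 0/1 module parameter (irdma defaults to iWARP; 1 selects RoCEv2):

# Sketch of the irdma reload traced above (nvmf/common.sh@399-403).
if [[ -e /sys/module/irdma/parameters/roce_ena ]]; then
    # unload only if RoCE is not already on; here the value was already 1,
    # so the (( 1 != 1 )) check in the trace skipped the unload
    (( $(cat /sys/module/irdma/parameters/roce_ena) != 1 )) && modprobe -r irdma
fi
modinfo irdma > /dev/null    # fail fast if the module is not installed
modprobe irdma roce_ena=1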
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:10:37.378 Found net devices under 0000:18:00.0: cvl_0_0 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:10:37.378 Found net devices under 0000:18:00.1: cvl_0_1 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # rdma_device_init 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@528 -- # allocate_nic_ips 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo cvl_0_0 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo cvl_0_1 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:10:37.378 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:10:37.378 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:10:37.378 altname enp24s0f0np0 00:10:37.378 altname ens785f0np0 00:10:37.378 inet 192.168.100.8/24 scope global cvl_0_0 00:10:37.378 valid_lft forever preferred_lft forever 00:10:37.378 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 
00:10:37.378 valid_lft forever preferred_lft forever 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:10:37.378 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:10:37.378 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:10:37.378 altname enp24s0f1np1 00:10:37.378 altname ens785f1np1 00:10:37.378 inet 192.168.100.9/24 scope global cvl_0_1 00:10:37.378 valid_lft forever preferred_lft forever 00:10:37.378 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:10:37.378 valid_lft forever preferred_lft forever 00:10:37.378 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo cvl_0_0 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo cvl_0_1 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:10:37.379 192.168.100.9' 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:10:37.379 192.168.100.9' 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # head -n 1 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:10:37.379 192.168.100.9' 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # head -n 1 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # tail -n +2 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:10:37.379 01:51:56 
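With both cvl interfaces up on the 192.168.100.0/24 fabric, the harness collects one IPv4 address per RDMA-capable netdev and assigns targets positionally: the first line of the list becomes NVMF_FIRST_TARGET_IP, the second NVMF_SECOND_TARGET_IP. A condensed sketch mirroring the head/tail pipeline in the trace (interface names hard-coded here purely for illustration):

# Mirrors the address selection traced above (nvmf/common.sh@482-484).
RDMA_IP_LIST=$(
    for dev in cvl_0_0 cvl_0_1; do
        ip -o -4 addr show "$dev" | awk '{print $4}' | cut -d/ -f1
    done
)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
[[ -n $NVMF_FIRST_TARGET_IP ]] || { echo 'no RDMA IPs found' >&2; exit 1; }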
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:10:37.379 run this test only with TCP transport for now 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:37.379 rmmod nvme_rdma 00:10:37.379 rmmod nvme_fabrics 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:37.379 01:51:56 
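multipath.sh then bails out deliberately: after confirming a second target IP exists, it sees the transport is rdma rather than tcp, prints the skip notice, tears the fabric down with nvmftestfini, and exits 0 so the suite records a pass rather than a failure (the EXIT trap at multipath.sh@1 then runs nvmftestfini a second time, which is why the module unload appears twice below). The guard as the multipath.sh@45-54 references show it; the transport variable name and the behavior of the -z branch, which was not taken in this run, are assumptions:

# Reconstructed guard; TEST_TRANSPORT is assumed to hold "rdma" here.
[ -z "$NVMF_SECOND_TARGET_IP" ] && exit 1    # hypothetical: two ports required
if [ "$TEST_TRANSPORT" != tcp ]; then
    echo 'run this test only with TCP transport for now'
    nvmftestfini    # unload nvme-rdma/nvme-fabrics before leaving
    exit 0
fi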
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:10:37.379 00:10:37.379 real 0m6.773s 00:10:37.379 user 0m1.955s 00:10:37.379 sys 0m5.011s 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:37.379 ************************************ 00:10:37.379 END TEST nvmf_target_multipath 00:10:37.379 ************************************ 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.379 01:51:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:37.379 ************************************ 00:10:37.379 START TEST nvmf_zcopy 00:10:37.379 ************************************ 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:10:37.379 * Looking for test storage... 
00:10:37.379 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:37.379 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:37.639 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.639 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:37.639 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.639 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:37.639 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:37.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.640 --rc genhtml_branch_coverage=1 00:10:37.640 --rc genhtml_function_coverage=1 00:10:37.640 --rc genhtml_legend=1 00:10:37.640 --rc geninfo_all_blocks=1 00:10:37.640 --rc geninfo_unexecuted_blocks=1 00:10:37.640 00:10:37.640 ' 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:37.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.640 --rc genhtml_branch_coverage=1 00:10:37.640 --rc genhtml_function_coverage=1 00:10:37.640 --rc genhtml_legend=1 00:10:37.640 --rc geninfo_all_blocks=1 00:10:37.640 --rc geninfo_unexecuted_blocks=1 00:10:37.640 00:10:37.640 ' 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:37.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.640 --rc genhtml_branch_coverage=1 00:10:37.640 --rc genhtml_function_coverage=1 00:10:37.640 --rc genhtml_legend=1 00:10:37.640 --rc geninfo_all_blocks=1 00:10:37.640 --rc geninfo_unexecuted_blocks=1 00:10:37.640 00:10:37.640 ' 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:37.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.640 --rc genhtml_branch_coverage=1 00:10:37.640 --rc genhtml_function_coverage=1 00:10:37.640 --rc genhtml_legend=1 00:10:37.640 --rc geninfo_all_blocks=1 00:10:37.640 --rc geninfo_unexecuted_blocks=1 00:10:37.640 00:10:37.640 ' 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.640 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:37.640 01:51:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:10:44.202 Found 0000:18:00.0 (0x8086 - 0x159b) 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:10:44.202 Found 0000:18:00.1 (0x8086 - 0x159b) 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:44.202 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:44.203 01:52:03 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@403 -- # modinfo irdma 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:10:44.203 Found net devices under 0000:18:00.0: cvl_0_0 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:10:44.203 Found net devices under 0000:18:00.1: cvl_0_1 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # rdma_device_init 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@528 -- # allocate_nic_ips 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo cvl_0_0 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo cvl_0_1 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@117 -- # cut -d/ -f1 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:10:44.203 28: cvl_0_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:10:44.203 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:10:44.203 altname enp24s0f0np0 00:10:44.203 altname ens785f0np0 00:10:44.203 inet 192.168.100.8/24 scope global cvl_0_0 00:10:44.203 valid_lft forever preferred_lft forever 00:10:44.203 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:10:44.203 valid_lft forever preferred_lft forever 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:10:44.203 29: cvl_0_1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:10:44.203 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:10:44.203 altname enp24s0f1np1 00:10:44.203 altname ens785f1np1 00:10:44.203 inet 192.168.100.9/24 scope global cvl_0_1 00:10:44.203 valid_lft forever preferred_lft forever 00:10:44.203 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:10:44.203 valid_lft forever preferred_lft forever 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for
net_dev in "${net_devs[@]}" 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo cvl_0_0 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo cvl_0_1 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:44.203 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:10:44.204 192.168.100.9' 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:10:44.204 192.168.100.9' 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # head -n 1 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:10:44.204 192.168.100.9' 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # tail -n +2 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # head -n 1 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:44.204 
01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=3157206 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 3157206 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3157206 ']' 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:44.204 01:52:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.204 [2024-10-09 01:52:03.712285] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:10:44.204 [2024-10-09 01:52:03.712394] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.204 [2024-10-09 01:52:03.841672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.463 [2024-10-09 01:52:04.032178] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.463 [2024-10-09 01:52:04.032230] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:44.463 [2024-10-09 01:52:04.032242] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.463 [2024-10-09 01:52:04.032255] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.463 [2024-10-09 01:52:04.032264] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
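The nvmftestinit trace above ends by turning the detected interface addresses into test variables: get_available_rdma_ips emits a newline-separated RDMA_IP_LIST, and common.sh@483-484 split it with head/tail so the first address becomes NVMF_FIRST_TARGET_IP and the second NVMF_SECOND_TARGET_IP. A minimal sketch of that selection, assuming the same two-address list seen in the log:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    # first line of the list -> first target IP (common.sh@483 in the trace)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    # drop line 1, then take the first remaining line (common.sh@484)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

With the addresses in hand, the harness appends --num-shared-buffers 1024 to the RDMA transport options, loads nvme-rdma, and starts nvmf_tgt with core mask 0x2, which is why EAL reports "Total cores available: 1" and the reactor comes up on core 1 just below.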
00:10:44.463 [2024-10-09 01:52:04.033454] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.722 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:44.722 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:44.722 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:44.722 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:44.722 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:10:44.981 Unsupported transport: rdma 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@808 -- # type=--id 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@809 -- # id=0 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # for n in $shm_files 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:44.981 nvmf_trace.0 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@823 -- # return 0 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:44.981 rmmod nvme_rdma 00:10:44.981 rmmod nvme_fabrics 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set 
-e 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 3157206 ']' 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 3157206 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3157206 ']' 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3157206 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3157206 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3157206' 00:10:44.981 killing process with pid 3157206 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3157206 00:10:44.981 01:52:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3157206 00:10:46.359 01:52:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:46.359 01:52:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:10:46.359 00:10:46.359 real 0m8.872s 00:10:46.359 user 0m4.274s 00:10:46.359 sys 0m5.378s 00:10:46.359 01:52:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.359 01:52:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:46.359 ************************************ 00:10:46.359 END TEST nvmf_zcopy 00:10:46.359 ************************************ 00:10:46.359 01:52:05 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:10:46.359 01:52:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:46.359 01:52:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.359 01:52:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:46.359 ************************************ 00:10:46.359 START TEST nvmf_nmic 00:10:46.359 ************************************ 00:10:46.359 01:52:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:10:46.359 * Looking for test storage... 
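nvmf_zcopy therefore never drives I/O on this rig: zcopy is a TCP-only test, and the guard at target/zcopy.sh@15-17 exits successfully for any other transport. Reconstructed from the xtrace (the variable name is an assumption; the trace only shows its expanded value, rdma):

    # guard as seen at target/zcopy.sh@15-17
    if [ "$TEST_TRANSPORT" != "tcp" ]; then
        echo "Unsupported transport: $TEST_TRANSPORT"
        exit 0    # exit 0, not 1, so the suite records a pass rather than a failure
    fi

The EXIT trap then runs process_shm, which tars /dev/shm/nvmf_trace.0 into the output directory, and nvmftestfini, which unloads nvme-rdma and nvme-fabrics under set +e (hence the bare rmmod lines) and kills the target, whose comm shows up as reactor_1. Only after that cleanup does run_test launch the next case, nvmf_nmic.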
00:10:46.359 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:10:46.359 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:46.359 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:10:46.359 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:46.359 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:46.359 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.359 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.359 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.359 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.359 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.359 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.359 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.359 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.359 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.359 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.359 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.359 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:46.359 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:46.359 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.359 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.359 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:46.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.617 --rc genhtml_branch_coverage=1 00:10:46.617 --rc genhtml_function_coverage=1 00:10:46.617 --rc genhtml_legend=1 00:10:46.617 --rc geninfo_all_blocks=1 00:10:46.617 --rc geninfo_unexecuted_blocks=1 00:10:46.617 00:10:46.617 ' 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:46.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.617 --rc genhtml_branch_coverage=1 00:10:46.617 --rc genhtml_function_coverage=1 00:10:46.617 --rc genhtml_legend=1 00:10:46.617 --rc geninfo_all_blocks=1 00:10:46.617 --rc geninfo_unexecuted_blocks=1 00:10:46.617 00:10:46.617 ' 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:46.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.617 --rc genhtml_branch_coverage=1 00:10:46.617 --rc genhtml_function_coverage=1 00:10:46.617 --rc genhtml_legend=1 00:10:46.617 --rc geninfo_all_blocks=1 00:10:46.617 --rc geninfo_unexecuted_blocks=1 00:10:46.617 00:10:46.617 ' 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:46.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.617 --rc genhtml_branch_coverage=1 00:10:46.617 --rc genhtml_function_coverage=1 00:10:46.617 --rc genhtml_legend=1 00:10:46.617 --rc geninfo_all_blocks=1 00:10:46.617 --rc geninfo_unexecuted_blocks=1 00:10:46.617 00:10:46.617 ' 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.617 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.618 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
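Each time nvmf/common.sh is sourced, it logs the same non-fatal bash error at line 33: '[' '' -eq 1 ']' fails because test's -eq requires integer operands and the flag being checked expands to an empty string. The trace does not show which variable that is, so SOME_FLAG below is a hypothetical stand-in; the failure and the usual defensive form look like:

    SOME_FLAG=''                    # hypothetical stand-in for the empty flag
    [ "$SOME_FLAG" -eq 1 ]          # -> [: : integer expression expected (status 2)
    [ "${SOME_FLAG:-0}" -eq 1 ]     # defaulting empty/unset to 0 keeps the test well-formed

Since the comparison sits in a conditional inside build_nvmf_app_args, the non-zero status is simply treated as false and the script carries on, so the message is noise rather than a failure.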
00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:46.618 01:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.180 01:52:12 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:10:53.180 Found 0000:18:00.0 (0x8086 - 0x159b) 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:10:53.180 Found 0000:18:00.1 (0x8086 - 0x159b) 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 
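gather_supported_nvmf_pci_devs classifies NICs purely by PCI vendor:device ID: Intel 0x1592/0x159b land in the e810 array, Intel 0x37d2 in x722, and the assorted Mellanox 0x15b3 IDs in mlx; because this rig targets e810, pci_devs is then narrowed to the e810 entries, which is why exactly two "Found 0000:18:00.x (0x8086 - 0x159b)" hits appear. A rough stand-alone equivalent of that bucketing, using lspci instead of the harness's pre-built pci_bus_cache map:

    declare -a e810 x722 mlx
    # lspci -Dn lines look like: 0000:18:00.0 0200: 8086:159b (rev 02)
    while read -r bdf _class id _rest; do
        case "$id" in
            8086:1592|8086:159b) e810+=("$bdf") ;;   # Intel E810 family
            8086:37d2)           x722+=("$bdf") ;;   # Intel X722
            15b3:*)              mlx+=("$bdf")  ;;   # Mellanox devices
        esac
    done < <(lspci -Dn)
    printf 'Found %d e810 port(s)\n' "${#e810[@]}"

Both ports come up bound to the ice driver, and the e810-over-rdma combination also swaps NVME_CONNECT to 'nvme connect -i 15' for the later connect calls, as the next entries show.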
00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:53.180 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@403 -- # modinfo irdma 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:10:53.181 Found net devices under 0000:18:00.0: cvl_0_0 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:10:53.181 Found net devices under 0000:18:00.1: cvl_0_1 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # rdma_device_init 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # 
'[' Linux '!=' Linux ']' 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@528 -- # allocate_nic_ips 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo cvl_0_0 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo cvl_0_1 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # 
cut -d/ -f1 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:10:53.181 28: cvl_0_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:10:53.181 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:10:53.181 altname enp24s0f0np0 00:10:53.181 altname ens785f0np0 00:10:53.181 inet 192.168.100.8/24 scope global cvl_0_0 00:10:53.181 valid_lft forever preferred_lft forever 00:10:53.181 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:10:53.181 valid_lft forever preferred_lft forever 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:10:53.181 29: cvl_0_1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:10:53.181 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:10:53.181 altname enp24s0f1np1 00:10:53.181 altname ens785f1np1 00:10:53.181 inet 192.168.100.9/24 scope global cvl_0_1 00:10:53.181 valid_lft forever preferred_lft forever 00:10:53.181 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:10:53.181 valid_lft forever preferred_lft forever 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:53.181 01:52:12
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo cvl_0_0 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo cvl_0_1 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:10:53.181 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:10:53.182 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:53.182 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:53.182 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:10:53.182 192.168.100.9' 00:10:53.182 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:10:53.182 192.168.100.9' 00:10:53.182 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # head -n 1 00:10:53.182 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:53.182 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:10:53.182 192.168.100.9' 00:10:53.182 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # tail -n +2 00:10:53.182 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # head -n 1 00:10:53.182 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:53.182 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:10:53.182 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:53.182 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:10:53.182 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:10:53.182 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:10:53.182 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:53.182 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:53.182 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:53.182 01:52:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.441 01:52:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:53.441 01:52:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=3160494 00:10:53.441 01:52:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 3160494 00:10:53.441 01:52:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3160494 ']' 00:10:53.441 01:52:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.441 01:52:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.441 01:52:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.441 01:52:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.441 01:52:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.441 [2024-10-09 01:52:13.088380] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:10:53.441 [2024-10-09 01:52:13.088504] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.441 [2024-10-09 01:52:13.218452] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.698 [2024-10-09 01:52:13.413162] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.698 [2024-10-09 01:52:13.413226] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.698 [2024-10-09 01:52:13.413239] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.698 [2024-10-09 01:52:13.413252] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.698 [2024-10-09 01:52:13.413263] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
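The per-interface address probe traced above (get_ip_address plus the common.sh @482-@485 bookkeeping) reduces to the sketch below: take the first IPv4 address on each RDMA-backed netdev, strip the /prefix, then split the collected list into a first and second target IP. Function body and variable names are reconstructed from the trace; the sample values are the ones this run reported for cvl_0_0 and cvl_0_1.

get_ip_address() {                  # reconstructed from the @116-@117 trace above
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST="$(get_ip_address cvl_0_0)
$(get_ip_address cvl_0_1)"                                 # "192.168.100.8\n192.168.100.9"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9
[ -z "$NVMF_FIRST_TARGET_IP" ] && echo 'no RDMA IPs detected' >&2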
00:10:53.698 [2024-10-09 01:52:13.415675] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.698 [2024-10-09 01:52:13.415730] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.698 [2024-10-09 01:52:13.415786] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.698 [2024-10-09 01:52:13.415794] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.263 01:52:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.263 01:52:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:54.263 01:52:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:54.263 01:52:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:54.263 01:52:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.263 01:52:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.263 01:52:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:54.263 01:52:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.263 01:52:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.263 [2024-10-09 01:52:13.996178] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x6120000292c0/0x617000007c40) succeed. 00:10:54.263 [2024-10-09 01:52:14.005861] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029440/0x617000007fc0) succeed. 00:10:54.263 [2024-10-09 01:52:14.005897] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:10:54.263 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.263 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:54.263 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.263 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.522 Malloc0 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.522 [2024-10-09 01:52:14.115664] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:54.522 test case1: single bdev can't be used in multiple subsystems 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 
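Taken together, the nmic.sh setup steps traced above amount to the RPC sequence below, issued through scripts/rpc.py (rpc_cmd is the suite's wrapper around it). Arguments are copied from this run; treat this as a condensed replay of the trace, not the script itself.

rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # -u 8192 requested; adjusted to 24576 per the notice above
$rpc bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420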
00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.522 [2024-10-09 01:52:14.147670] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:54.522 [2024-10-09 01:52:14.147709] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:54.522 [2024-10-09 01:52:14.147728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.522 request: 00:10:54.522 { 00:10:54.522 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:54.522 "namespace": { 00:10:54.522 "bdev_name": "Malloc0", 00:10:54.522 "no_auto_visible": false 00:10:54.522 }, 00:10:54.522 "method": "nvmf_subsystem_add_ns", 00:10:54.522 "req_id": 1 00:10:54.522 } 00:10:54.522 Got JSON-RPC error response 00:10:54.522 response: 00:10:54.522 { 00:10:54.522 "code": -32602, 00:10:54.522 "message": "Invalid parameters" 00:10:54.522 } 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:54.522 Adding namespace failed - expected result. 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:54.522 test case2: host connect to nvmf target in multiple paths 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.522 [2024-10-09 01:52:14.163727] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.522 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:54.780 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:10:55.037 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:55.037 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:55.037 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:10:55.037 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:55.037 01:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:56.935 01:52:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:56.935 01:52:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:56.935 01:52:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:56.935 01:52:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:56.935 01:52:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:56.935 01:52:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:56.935 01:52:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:56.935 [global] 00:10:56.935 thread=1 00:10:56.935 invalidate=1 00:10:56.935 rw=write 00:10:56.935 time_based=1 00:10:56.935 runtime=1 00:10:56.935 ioengine=libaio 00:10:56.935 direct=1 00:10:56.935 bs=4096 00:10:56.935 iodepth=1 00:10:56.935 norandommap=0 00:10:56.935 numjobs=1 00:10:56.935 00:10:56.935 verify_dump=1 00:10:56.935 verify_backlog=512 00:10:56.935 verify_state_save=0 00:10:56.935 do_verify=1 00:10:56.935 verify=crc32c-intel 00:10:56.935 [job0] 00:10:56.935 filename=/dev/nvme0n1 00:10:56.935 Could not set queue depth (nvme0n1) 00:10:57.192 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.192 fio-3.35 00:10:57.192 Starting 1 thread 00:10:58.563 00:10:58.563 job0: (groupid=0, jobs=1): err= 0: pid=3161148: Wed Oct 9 01:52:18 2024 00:10:58.563 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:10:58.563 slat (nsec): min=8503, max=28293, avg=9039.08, stdev=832.57 00:10:58.563 clat (usec): min=62, max=104, avg=82.36, stdev= 4.23 00:10:58.563 lat (usec): min=81, max=114, avg=91.40, stdev= 4.29 00:10:58.563 clat percentiles (usec): 00:10:58.563 | 1.00th=[ 75], 5.00th=[ 77], 10.00th=[ 78], 20.00th=[ 79], 00:10:58.563 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 83], 60.00th=[ 84], 00:10:58.563 | 70.00th=[ 85], 80.00th=[ 86], 90.00th=[ 88], 95.00th=[ 90], 00:10:58.563 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 99], 99.95th=[ 100], 00:10:58.563 | 99.99th=[ 105] 00:10:58.563 write: IOPS=5445, BW=21.3MiB/s (22.3MB/s)(21.3MiB/1001msec); 0 zone resets 00:10:58.563 slat (nsec): min=10855, max=51953, avg=11777.24, stdev=1307.93 00:10:58.563 clat (usec): min=67, max=178, avg=80.43, stdev= 5.11 00:10:58.563 lat (usec): min=81, max=228, avg=92.21, stdev= 5.53 00:10:58.563 clat percentiles (usec): 00:10:58.563 | 1.00th=[ 73], 5.00th=[ 75], 10.00th=[ 76], 20.00th=[ 77], 00:10:58.563 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 81], 60.00th=[ 82], 00:10:58.563 | 70.00th=[ 83], 80.00th=[ 84], 90.00th=[ 87], 95.00th=[ 89], 00:10:58.563 | 99.00th=[ 93], 99.50th=[ 95], 99.90th=[ 122], 99.95th=[ 167], 00:10:58.563 | 99.99th=[ 180] 00:10:58.563 bw ( KiB/s): min=22032, max=22032, per=100.00%, avg=22032.00, stdev= 0.00, samples=1 00:10:58.563 iops : min= 5508, max= 5508, avg=5508.00, stdev= 0.00, samples=1 00:10:58.563 lat (usec) : 100=99.87%, 250=0.13% 00:10:58.563 cpu : usr=7.70%, 
sys=10.80%, ctx=10571, majf=0, minf=1 00:10:58.563 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.563 issued rwts: total=5120,5451,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.563 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.563 00:10:58.563 Run status group 0 (all jobs): 00:10:58.563 READ: bw=20.0MiB/s (20.9MB/s), 20.0MiB/s-20.0MiB/s (20.9MB/s-20.9MB/s), io=20.0MiB (21.0MB), run=1001-1001msec 00:10:58.563 WRITE: bw=21.3MiB/s (22.3MB/s), 21.3MiB/s-21.3MiB/s (22.3MB/s-22.3MB/s), io=21.3MiB (22.3MB), run=1001-1001msec 00:10:58.563 00:10:58.563 Disk stats (read/write): 00:10:58.563 nvme0n1: ios=4658/4892, merge=0/0, ticks=355/364, in_queue=719, util=90.78% 00:10:58.563 01:52:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:00.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:00.462 rmmod nvme_rdma 00:11:00.462 rmmod nvme_fabrics 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 3160494 ']' 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 3160494 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3160494 ']' 
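On the host side, the flow traced above (nmic.sh @41-@48 plus waitforserial) is: connect the same subsystem over both listeners, poll lsblk until a namespace with the expected serial appears, run the fio job printed above against /dev/nvme0n1, then disconnect. A reconstructed sketch with this run's values; the hostid UUID is specific to this rig:

hostid=80e71deb-ee4e-e711-906e-0012795d9712
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
nvme connect -i 15 --hostnqn=$hostnqn --hostid=$hostid -t rdma \
    -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
nvme connect -i 15 --hostnqn=$hostnqn --hostid=$hostid -t rdma \
    -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421   # second path, port 4421
i=0
while [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -lt 1 ]; do
    (( i++ >= 15 )) && { echo 'namespace never appeared' >&2; exit 1; }
    sleep 2
done
# ... fio runs against /dev/nvme0n1 here ...
nvme disconnect -n nqn.2016-06.io.spdk:cnode1                # drops both controllers, as logged above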
00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3160494 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3160494 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3160494' 00:11:00.462 killing process with pid 3160494 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3160494 00:11:00.462 01:52:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3160494 00:11:01.835 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:01.835 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:11:01.835 00:11:01.835 real 0m15.468s 00:11:01.835 user 0m35.738s 00:11:01.835 sys 0m6.227s 00:11:01.835 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:01.835 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.835 ************************************ 00:11:01.835 END TEST nvmf_nmic 00:11:01.835 ************************************ 00:11:01.835 01:52:21 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:11:01.835 01:52:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:01.835 01:52:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.835 01:52:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:01.835 ************************************ 00:11:01.835 START TEST nvmf_fio_target 00:11:01.835 ************************************ 00:11:01.835 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:11:01.835 * Looking for test storage... 
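The killprocess sequence just traced (kill -0, ps comm check, kill, wait) is the suite's guarded teardown of the target. A reconstructed sketch, assuming as in the harness that the target pid is a child of the calling shell:

killprocess() {                       # reconstructed from the @950-@974 trace above
    local pid=$1
    kill -0 "$pid" || return          # bail if the process is already gone
    ps --no-headers -o comm= "$pid"   # comm is reactor_0 in this run; the suite
                                      # special-cases a 'sudo' wrapper process
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                       # reaping works because nvmf_tgt is our child
}
killprocess 3160494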
00:11:01.835 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:11:01.835 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:01.835 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:11:01.835 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.093 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:02.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.093 --rc genhtml_branch_coverage=1 00:11:02.093 --rc genhtml_function_coverage=1 00:11:02.093 --rc genhtml_legend=1 00:11:02.093 --rc geninfo_all_blocks=1 00:11:02.094 --rc geninfo_unexecuted_blocks=1 00:11:02.094 00:11:02.094 ' 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:02.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.094 --rc genhtml_branch_coverage=1 00:11:02.094 --rc genhtml_function_coverage=1 00:11:02.094 --rc genhtml_legend=1 00:11:02.094 --rc geninfo_all_blocks=1 00:11:02.094 --rc geninfo_unexecuted_blocks=1 00:11:02.094 00:11:02.094 ' 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:02.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.094 --rc genhtml_branch_coverage=1 00:11:02.094 --rc genhtml_function_coverage=1 00:11:02.094 --rc genhtml_legend=1 00:11:02.094 --rc geninfo_all_blocks=1 00:11:02.094 --rc geninfo_unexecuted_blocks=1 00:11:02.094 00:11:02.094 ' 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:02.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.094 --rc genhtml_branch_coverage=1 00:11:02.094 --rc genhtml_function_coverage=1 00:11:02.094 --rc genhtml_legend=1 00:11:02.094 --rc geninfo_all_blocks=1 00:11:02.094 --rc geninfo_unexecuted_blocks=1 00:11:02.094 00:11:02.094 ' 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:02.094 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:02.094 
01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:02.094 01:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:11:08.660 Found 0000:18:00.0 (0x8086 - 0x159b) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:11:08.660 Found 
0000:18:00.1 (0x8086 - 0x159b) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@403 -- # modinfo irdma 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:11:08.660 Found net devices under 0000:18:00.0: cvl_0_0 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:11:08.660 Found net devices under 0000:18:00.1: cvl_0_1 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:08.660 01:52:28 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # rdma_device_init 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@528 -- # allocate_nic_ips 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo cvl_0_0 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:08.660 
01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo cvl_0_1 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:08.660 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:11:08.661 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:11:08.661 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:11:08.661 altname enp24s0f0np0 00:11:08.661 altname ens785f0np0 00:11:08.661 inet 192.168.100.8/24 scope global cvl_0_0 00:11:08.661 valid_lft forever preferred_lft forever 00:11:08.661 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:11:08.661 valid_lft forever preferred_lft forever 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:11:08.661 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:11:08.661 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:11:08.661 altname enp24s0f1np1 00:11:08.661 altname ens785f1np1 00:11:08.661 inet 192.168.100.9/24 scope global cvl_0_1 00:11:08.661 valid_lft forever preferred_lft forever 00:11:08.661 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:11:08.661 valid_lft forever preferred_lft forever 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:08.661 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo cvl_0_0 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo cvl_0_1 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@116 -- # interface=cvl_0_1 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:11:08.920 192.168.100.9' 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:11:08.920 192.168.100.9' 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # head -n 1 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:11:08.920 192.168.100.9' 00:11:08.920 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # tail -n +2 00:11:08.921 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # head -n 1 00:11:08.921 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:08.921 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:11:08.921 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:08.921 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:11:08.921 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:11:08.921 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:11:08.921 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:08.921 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:08.921 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:08.921 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.921 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=3164777 00:11:08.921 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:08.921 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 3164777 00:11:08.921 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3164777 ']' 00:11:08.921 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.921 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:08.921 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
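The trace above is nvmf/common.sh resolving the test addresses from the two E810 RDMA ports. A minimal standalone sketch of that derivation, assuming only what the trace shows (the helper name, interface names, and ip/awk/cut pipeline are taken from the trace; the wrapper script around them is an assumption):

get_ip_address() {
    local interface=$1
    # field 4 of `ip -o -4 addr show` is the CIDR address, e.g. 192.168.100.8/24
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=""
for nic_name in cvl_0_0 cvl_0_1; do    # interfaces returned by get_rdma_if_list in this run
    RDMA_IP_LIST+="$(get_ip_address "$nic_name")"$'\n'
done

NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9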
00:11:08.921 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:08.921 01:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.921 [2024-10-09 01:52:28.676528] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:11:08.921 [2024-10-09 01:52:28.676648] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.180 [2024-10-09 01:52:28.807958] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.438 [2024-10-09 01:52:29.008968] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.438 [2024-10-09 01:52:29.009025] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.438 [2024-10-09 01:52:29.009039] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.438 [2024-10-09 01:52:29.009053] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.438 [2024-10-09 01:52:29.009063] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:09.438 [2024-10-09 01:52:29.011438] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.438 [2024-10-09 01:52:29.011510] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.438 [2024-10-09 01:52:29.011577] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.438 [2024-10-09 01:52:29.011583] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.696 01:52:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:09.696 01:52:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:11:09.696 01:52:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:09.696 01:52:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:09.696 01:52:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.954 01:52:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.954 01:52:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:09.954 [2024-10-09 01:52:29.735911] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x6120000292c0/0x617000007c40) succeed. 00:11:09.954 [2024-10-09 01:52:29.745689] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029440/0x617000007fc0) succeed. 00:11:09.954 [2024-10-09 01:52:29.745725] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
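With the RDMA transport created, target/fio.sh builds the bdev and subsystem topology that the rest of this trace exercises: two plain malloc bdevs, a raid0 over two more, a concat over three more, all exported as namespaces of one subsystem, then a host-side connect. Condensed into plain shell (every command appears verbatim in the trace that follows; the $rpc shorthand and the condensed readiness loop are assumptions):

rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py

for i in 0 1 2 3 4 5 6; do
    $rpc bdev_malloc_create 64 512    # Malloc0..Malloc6: 64 MiB each, 512-byte blocks
done
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 \
    --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 \
    -a 192.168.100.8 -s 4420

# waitforserial, condensed: poll until all four namespaces show up as /dev/nvme0n1..n4
while (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) < 4 )); do
    sleep 2
done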
New I/O unit size 24576 00:11:10.213 01:52:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:10.472 01:52:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:10.472 01:52:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:10.732 01:52:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:10.732 01:52:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:10.991 01:52:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:10.991 01:52:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.250 01:52:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:11.250 01:52:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:11.510 01:52:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.769 01:52:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:11.769 01:52:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.028 01:52:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:12.028 01:52:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.287 01:52:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:12.287 01:52:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:12.546 01:52:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:12.546 01:52:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:12.546 01:52:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:12.804 01:52:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:12.804 01:52:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:13.063 01:52:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:13.321 [2024-10-09 01:52:32.920909] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:13.321 01:52:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:13.579 01:52:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:13.579 01:52:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:13.838 01:52:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:13.838 01:52:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:13.838 01:52:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.838 01:52:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:13.838 01:52:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:13.838 01:52:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:16.370 01:52:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:16.370 01:52:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:16.370 01:52:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:16.370 01:52:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:16.370 01:52:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:16.370 01:52:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:16.370 01:52:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:16.370 [global] 00:11:16.370 thread=1 00:11:16.370 invalidate=1 00:11:16.370 rw=write 00:11:16.370 time_based=1 00:11:16.370 runtime=1 00:11:16.370 ioengine=libaio 00:11:16.370 direct=1 00:11:16.370 bs=4096 00:11:16.370 iodepth=1 00:11:16.370 norandommap=0 00:11:16.370 numjobs=1 00:11:16.370 00:11:16.370 verify_dump=1 00:11:16.370 verify_backlog=512 00:11:16.370 verify_state_save=0 00:11:16.370 do_verify=1 00:11:16.370 verify=crc32c-intel 00:11:16.370 [job0] 00:11:16.370 filename=/dev/nvme0n1 00:11:16.370 [job1] 00:11:16.370 filename=/dev/nvme0n2 00:11:16.370 [job2] 00:11:16.370 filename=/dev/nvme0n3 00:11:16.370 [job3] 00:11:16.370 filename=/dev/nvme0n4 00:11:16.370 Could not set queue depth (nvme0n1) 00:11:16.370 Could not set queue depth (nvme0n2) 00:11:16.370 Could not set queue depth (nvme0n3) 00:11:16.370 Could not 
set queue depth (nvme0n4) 00:11:16.370 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:16.370 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:16.370 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:16.370 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:16.370 fio-3.35 00:11:16.370 Starting 4 threads 00:11:17.747 00:11:17.747 job0: (groupid=0, jobs=1): err= 0: pid=3165839: Wed Oct 9 01:52:37 2024 00:11:17.747 read: IOPS=4044, BW=15.8MiB/s (16.6MB/s)(15.8MiB/1001msec) 00:11:17.747 slat (nsec): min=8528, max=28683, avg=9187.91, stdev=1060.86 00:11:17.747 clat (usec): min=90, max=327, avg=106.19, stdev= 6.99 00:11:17.747 lat (usec): min=99, max=336, avg=115.38, stdev= 7.07 00:11:17.747 clat percentiles (usec): 00:11:17.747 | 1.00th=[ 95], 5.00th=[ 98], 10.00th=[ 99], 20.00th=[ 101], 00:11:17.747 | 30.00th=[ 103], 40.00th=[ 104], 50.00th=[ 105], 60.00th=[ 108], 00:11:17.747 | 70.00th=[ 110], 80.00th=[ 112], 90.00th=[ 115], 95.00th=[ 118], 00:11:17.747 | 99.00th=[ 124], 99.50th=[ 127], 99.90th=[ 135], 99.95th=[ 137], 00:11:17.747 | 99.99th=[ 326] 00:11:17.747 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:11:17.747 slat (nsec): min=7618, max=54888, avg=11920.69, stdev=1413.51 00:11:17.747 clat (usec): min=87, max=403, avg=112.56, stdev=29.62 00:11:17.747 lat (usec): min=99, max=415, avg=124.48, stdev=29.79 00:11:17.747 clat percentiles (usec): 00:11:17.747 | 1.00th=[ 92], 5.00th=[ 94], 10.00th=[ 96], 20.00th=[ 98], 00:11:17.747 | 30.00th=[ 100], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 105], 00:11:17.747 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 182], 95.00th=[ 196], 00:11:17.747 | 99.00th=[ 204], 99.50th=[ 208], 99.90th=[ 225], 99.95th=[ 251], 00:11:17.747 | 99.99th=[ 404] 00:11:17.747 bw ( KiB/s): min=16384, max=16384, per=33.74%, avg=16384.00, stdev= 0.00, samples=1 00:11:17.747 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:11:17.747 lat (usec) : 100=23.33%, 250=76.64%, 500=0.04% 00:11:17.747 cpu : usr=6.40%, sys=8.20%, ctx=8145, majf=0, minf=2 00:11:17.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.747 issued rwts: total=4049,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.747 job1: (groupid=0, jobs=1): err= 0: pid=3165840: Wed Oct 9 01:52:37 2024 00:11:17.747 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:17.747 slat (nsec): min=8374, max=44598, avg=9302.42, stdev=1411.25 00:11:17.747 clat (usec): min=84, max=275, avg=175.92, stdev=34.23 00:11:17.747 lat (usec): min=93, max=285, avg=185.23, stdev=34.28 00:11:17.747 clat percentiles (usec): 00:11:17.747 | 1.00th=[ 90], 5.00th=[ 96], 10.00th=[ 108], 20.00th=[ 167], 00:11:17.747 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:11:17.747 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 215], 00:11:17.747 | 99.00th=[ 233], 99.50th=[ 247], 99.90th=[ 269], 99.95th=[ 269], 00:11:17.747 | 99.99th=[ 277] 00:11:17.747 write: IOPS=2862, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec); 0 zone resets 00:11:17.747 slat (nsec): min=10366, 
max=50049, avg=11581.08, stdev=1503.07 00:11:17.747 clat (usec): min=82, max=274, avg=167.60, stdev=40.26 00:11:17.747 lat (usec): min=93, max=285, avg=179.18, stdev=40.07 00:11:17.747 clat percentiles (usec): 00:11:17.747 | 1.00th=[ 86], 5.00th=[ 90], 10.00th=[ 94], 20.00th=[ 111], 00:11:17.748 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:11:17.748 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 215], 00:11:17.748 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 265], 99.95th=[ 273], 00:11:17.748 | 99.99th=[ 273] 00:11:17.748 bw ( KiB/s): min=12288, max=12288, per=25.31%, avg=12288.00, stdev= 0.00, samples=1 00:11:17.748 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:17.748 lat (usec) : 100=11.80%, 250=87.54%, 500=0.66% 00:11:17.748 cpu : usr=4.50%, sys=7.50%, ctx=5425, majf=0, minf=1 00:11:17.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.748 issued rwts: total=2560,2865,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.748 job2: (groupid=0, jobs=1): err= 0: pid=3165841: Wed Oct 9 01:52:37 2024 00:11:17.748 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:17.748 slat (nsec): min=8661, max=25842, avg=9376.34, stdev=1010.75 00:11:17.748 clat (usec): min=102, max=245, avg=185.82, stdev=21.89 00:11:17.748 lat (usec): min=111, max=255, avg=195.19, stdev=21.90 00:11:17.748 clat percentiles (usec): 00:11:17.748 | 1.00th=[ 112], 5.00th=[ 125], 10.00th=[ 165], 20.00th=[ 178], 00:11:17.748 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:11:17.748 | 70.00th=[ 198], 80.00th=[ 200], 90.00th=[ 206], 95.00th=[ 210], 00:11:17.748 | 99.00th=[ 225], 99.50th=[ 231], 99.90th=[ 239], 99.95th=[ 243], 00:11:17.748 | 99.99th=[ 245] 00:11:17.748 write: IOPS=2627, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec); 0 zone resets 00:11:17.748 slat (nsec): min=10872, max=43356, avg=11888.11, stdev=1345.68 00:11:17.748 clat (usec): min=100, max=237, avg=173.55, stdev=20.84 00:11:17.748 lat (usec): min=112, max=265, avg=185.43, stdev=20.84 00:11:17.748 clat percentiles (usec): 00:11:17.748 | 1.00th=[ 108], 5.00th=[ 133], 10.00th=[ 149], 20.00th=[ 161], 00:11:17.748 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:11:17.748 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 202], 00:11:17.748 | 99.00th=[ 217], 99.50th=[ 221], 99.90th=[ 231], 99.95th=[ 233], 00:11:17.748 | 99.99th=[ 239] 00:11:17.748 bw ( KiB/s): min=12288, max=12288, per=25.31%, avg=12288.00, stdev= 0.00, samples=1 00:11:17.748 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:17.748 lat (usec) : 250=100.00% 00:11:17.748 cpu : usr=3.40%, sys=5.80%, ctx=5190, majf=0, minf=2 00:11:17.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.748 issued rwts: total=2560,2630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.748 job3: (groupid=0, jobs=1): err= 0: pid=3165842: Wed Oct 9 01:52:37 2024 00:11:17.748 read: IOPS=2379, BW=9518KiB/s (9747kB/s)(9528KiB/1001msec) 00:11:17.748 slat (nsec): 
min=8813, max=25711, avg=9468.14, stdev=1097.00 00:11:17.748 clat (usec): min=112, max=276, avg=196.42, stdev=19.80 00:11:17.748 lat (usec): min=122, max=286, avg=205.89, stdev=19.82 00:11:17.748 clat percentiles (usec): 00:11:17.748 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:11:17.748 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:11:17.748 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 221], 95.00th=[ 243], 00:11:17.748 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 273], 99.95th=[ 277], 00:11:17.748 | 99.99th=[ 277] 00:11:17.748 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:17.748 slat (nsec): min=10839, max=42273, avg=11986.56, stdev=1702.19 00:11:17.748 clat (usec): min=106, max=283, avg=182.28, stdev=24.54 00:11:17.748 lat (usec): min=117, max=296, avg=194.27, stdev=24.53 00:11:17.748 clat percentiles (usec): 00:11:17.748 | 1.00th=[ 121], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 165], 00:11:17.748 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:11:17.748 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 215], 95.00th=[ 227], 00:11:17.748 | 99.00th=[ 258], 99.50th=[ 262], 99.90th=[ 269], 99.95th=[ 277], 00:11:17.748 | 99.99th=[ 285] 00:11:17.748 bw ( KiB/s): min=12056, max=12056, per=24.83%, avg=12056.00, stdev= 0.00, samples=1 00:11:17.748 iops : min= 3014, max= 3014, avg=3014.00, stdev= 0.00, samples=1 00:11:17.748 lat (usec) : 250=97.35%, 500=2.65% 00:11:17.748 cpu : usr=3.20%, sys=5.60%, ctx=4942, majf=0, minf=1 00:11:17.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.748 issued rwts: total=2382,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.748 00:11:17.748 Run status group 0 (all jobs): 00:11:17.748 READ: bw=45.1MiB/s (47.3MB/s), 9518KiB/s-15.8MiB/s (9747kB/s-16.6MB/s), io=45.1MiB (47.3MB), run=1001-1001msec 00:11:17.748 WRITE: bw=47.4MiB/s (49.7MB/s), 9.99MiB/s-16.0MiB/s (10.5MB/s-16.8MB/s), io=47.5MiB (49.8MB), run=1001-1001msec 00:11:17.748 00:11:17.748 Disk stats (read/write): 00:11:17.748 nvme0n1: ios=3338/3584, merge=0/0, ticks=359/373, in_queue=732, util=86.07% 00:11:17.748 nvme0n2: ios=2077/2560, merge=0/0, ticks=345/403, in_queue=748, util=86.66% 00:11:17.748 nvme0n3: ios=2048/2352, merge=0/0, ticks=373/398, in_queue=771, util=88.83% 00:11:17.748 nvme0n4: ios=2048/2180, merge=0/0, ticks=386/376, in_queue=762, util=89.58% 00:11:17.748 01:52:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:17.748 [global] 00:11:17.748 thread=1 00:11:17.748 invalidate=1 00:11:17.748 rw=randwrite 00:11:17.748 time_based=1 00:11:17.748 runtime=1 00:11:17.748 ioengine=libaio 00:11:17.748 direct=1 00:11:17.748 bs=4096 00:11:17.748 iodepth=1 00:11:17.748 norandommap=0 00:11:17.748 numjobs=1 00:11:17.748 00:11:17.748 verify_dump=1 00:11:17.748 verify_backlog=512 00:11:17.748 verify_state_save=0 00:11:17.748 do_verify=1 00:11:17.748 verify=crc32c-intel 00:11:17.748 [job0] 00:11:17.748 filename=/dev/nvme0n1 00:11:17.748 [job1] 00:11:17.748 filename=/dev/nvme0n2 00:11:17.748 [job2] 00:11:17.748 filename=/dev/nvme0n3 00:11:17.748 [job3] 00:11:17.748 filename=/dev/nvme0n4 00:11:17.748 Could not 
set queue depth (nvme0n1) 00:11:17.748 Could not set queue depth (nvme0n2) 00:11:17.748 Could not set queue depth (nvme0n3) 00:11:17.748 Could not set queue depth (nvme0n4) 00:11:17.748 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.748 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.748 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.748 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.748 fio-3.35 00:11:17.748 Starting 4 threads 00:11:19.127 00:11:19.127 job0: (groupid=0, jobs=1): err= 0: pid=3166134: Wed Oct 9 01:52:38 2024 00:11:19.127 read: IOPS=4004, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1001msec) 00:11:19.127 slat (nsec): min=8535, max=37065, avg=9210.43, stdev=1269.19 00:11:19.127 clat (usec): min=86, max=225, avg=113.51, stdev=17.71 00:11:19.127 lat (usec): min=96, max=234, avg=122.72, stdev=17.79 00:11:19.127 clat percentiles (usec): 00:11:19.127 | 1.00th=[ 91], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 99], 00:11:19.127 | 30.00th=[ 101], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 111], 00:11:19.127 | 70.00th=[ 124], 80.00th=[ 135], 90.00th=[ 139], 95.00th=[ 143], 00:11:19.127 | 99.00th=[ 157], 99.50th=[ 180], 99.90th=[ 192], 99.95th=[ 200], 00:11:19.127 | 99.99th=[ 227] 00:11:19.127 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:11:19.127 slat (nsec): min=10546, max=44578, avg=11555.58, stdev=1201.01 00:11:19.128 clat (usec): min=83, max=164, avg=106.93, stdev=16.24 00:11:19.128 lat (usec): min=95, max=208, avg=118.48, stdev=16.30 00:11:19.128 clat percentiles (usec): 00:11:19.128 | 1.00th=[ 88], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 95], 00:11:19.128 | 30.00th=[ 97], 40.00th=[ 99], 50.00th=[ 101], 60.00th=[ 103], 00:11:19.128 | 70.00th=[ 108], 80.00th=[ 126], 90.00th=[ 137], 95.00th=[ 141], 00:11:19.128 | 99.00th=[ 147], 99.50th=[ 149], 99.90th=[ 155], 99.95th=[ 157], 00:11:19.128 | 99.99th=[ 165] 00:11:19.128 bw ( KiB/s): min=17664, max=17664, per=29.99%, avg=17664.00, stdev= 0.00, samples=1 00:11:19.128 iops : min= 4416, max= 4416, avg=4416.00, stdev= 0.00, samples=1 00:11:19.128 lat (usec) : 100=35.67%, 250=64.33% 00:11:19.128 cpu : usr=5.30%, sys=9.00%, ctx=8105, majf=0, minf=1 00:11:19.128 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.128 issued rwts: total=4009,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.128 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.128 job1: (groupid=0, jobs=1): err= 0: pid=3166136: Wed Oct 9 01:52:38 2024 00:11:19.128 read: IOPS=3193, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec) 00:11:19.128 slat (nsec): min=8408, max=26324, avg=9122.90, stdev=1004.71 00:11:19.128 clat (usec): min=92, max=197, avg=138.12, stdev=20.38 00:11:19.128 lat (usec): min=101, max=206, avg=147.25, stdev=20.34 00:11:19.128 clat percentiles (usec): 00:11:19.128 | 1.00th=[ 96], 5.00th=[ 100], 10.00th=[ 103], 20.00th=[ 121], 00:11:19.128 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 143], 60.00th=[ 147], 00:11:19.128 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 165], 00:11:19.128 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 192], 99.95th=[ 194], 
00:11:19.128 | 99.99th=[ 198] 00:11:19.128 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:19.128 slat (nsec): min=10401, max=75240, avg=11414.86, stdev=1625.66 00:11:19.128 clat (usec): min=84, max=184, avg=131.40, stdev=22.67 00:11:19.128 lat (usec): min=98, max=195, avg=142.81, stdev=22.44 00:11:19.128 clat percentiles (usec): 00:11:19.128 | 1.00th=[ 91], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 102], 00:11:19.128 | 30.00th=[ 121], 40.00th=[ 133], 50.00th=[ 139], 60.00th=[ 143], 00:11:19.128 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 157], 95.00th=[ 161], 00:11:19.128 | 99.00th=[ 169], 99.50th=[ 172], 99.90th=[ 180], 99.95th=[ 184], 00:11:19.128 | 99.99th=[ 186] 00:11:19.128 bw ( KiB/s): min=16000, max=16000, per=27.17%, avg=16000.00, stdev= 0.00, samples=1 00:11:19.128 iops : min= 4000, max= 4000, avg=4000.00, stdev= 0.00, samples=1 00:11:19.128 lat (usec) : 100=11.12%, 250=88.88% 00:11:19.128 cpu : usr=4.70%, sys=7.20%, ctx=6781, majf=0, minf=1 00:11:19.128 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.128 issued rwts: total=3197,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.128 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.128 job2: (groupid=0, jobs=1): err= 0: pid=3166137: Wed Oct 9 01:52:38 2024 00:11:19.128 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:11:19.128 slat (nsec): min=8735, max=28048, avg=9471.81, stdev=1083.55 00:11:19.128 clat (usec): min=93, max=174, avg=122.53, stdev=19.62 00:11:19.128 lat (usec): min=102, max=184, avg=132.00, stdev=19.78 00:11:19.128 clat percentiles (usec): 00:11:19.128 | 1.00th=[ 97], 5.00th=[ 100], 10.00th=[ 102], 20.00th=[ 104], 00:11:19.128 | 30.00th=[ 108], 40.00th=[ 110], 50.00th=[ 114], 60.00th=[ 129], 00:11:19.128 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 155], 00:11:19.128 | 99.00th=[ 163], 99.50th=[ 165], 99.90th=[ 172], 99.95th=[ 174], 00:11:19.128 | 99.99th=[ 176] 00:11:19.128 write: IOPS=3854, BW=15.1MiB/s (15.8MB/s)(15.1MiB/1001msec); 0 zone resets 00:11:19.128 slat (nsec): min=10659, max=61939, avg=11832.49, stdev=1567.70 00:11:19.128 clat (usec): min=89, max=176, avg=119.29, stdev=18.69 00:11:19.128 lat (usec): min=101, max=238, avg=131.12, stdev=18.96 00:11:19.128 clat percentiles (usec): 00:11:19.128 | 1.00th=[ 93], 5.00th=[ 96], 10.00th=[ 98], 20.00th=[ 101], 00:11:19.128 | 30.00th=[ 104], 40.00th=[ 108], 50.00th=[ 114], 60.00th=[ 128], 00:11:19.128 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 149], 00:11:19.128 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 167], 99.95th=[ 174], 00:11:19.128 | 99.99th=[ 178] 00:11:19.128 bw ( KiB/s): min=16384, max=16384, per=27.82%, avg=16384.00, stdev= 0.00, samples=1 00:11:19.128 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:11:19.128 lat (usec) : 100=11.42%, 250=88.58% 00:11:19.128 cpu : usr=6.40%, sys=7.00%, ctx=7442, majf=0, minf=1 00:11:19.128 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.128 issued rwts: total=3584,3858,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.128 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.128 job3: (groupid=0, 
jobs=1): err= 0: pid=3166138: Wed Oct 9 01:52:38 2024 00:11:19.128 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:19.128 slat (nsec): min=8765, max=24922, avg=10165.93, stdev=1625.57 00:11:19.128 clat (usec): min=118, max=201, avg=148.05, stdev= 9.48 00:11:19.128 lat (usec): min=127, max=210, avg=158.22, stdev= 8.96 00:11:19.128 clat percentiles (usec): 00:11:19.128 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:11:19.128 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 151], 00:11:19.128 | 70.00th=[ 153], 80.00th=[ 155], 90.00th=[ 159], 95.00th=[ 163], 00:11:19.128 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 190], 99.95th=[ 196], 00:11:19.128 | 99.99th=[ 202] 00:11:19.128 write: IOPS=3196, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec); 0 zone resets 00:11:19.128 slat (nsec): min=10463, max=97538, avg=12288.67, stdev=2378.38 00:11:19.128 clat (usec): min=107, max=226, avg=142.77, stdev=11.32 00:11:19.128 lat (usec): min=119, max=241, avg=155.06, stdev=10.97 00:11:19.128 clat percentiles (usec): 00:11:19.128 | 1.00th=[ 120], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 133], 00:11:19.128 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 147], 00:11:19.128 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 157], 95.00th=[ 161], 00:11:19.128 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 180], 99.95th=[ 190], 00:11:19.128 | 99.99th=[ 227] 00:11:19.128 bw ( KiB/s): min=12736, max=12736, per=21.63%, avg=12736.00, stdev= 0.00, samples=1 00:11:19.128 iops : min= 3184, max= 3184, avg=3184.00, stdev= 0.00, samples=1 00:11:19.128 lat (usec) : 250=100.00% 00:11:19.128 cpu : usr=4.00%, sys=7.80%, ctx=6273, majf=0, minf=1 00:11:19.128 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.128 issued rwts: total=3072,3200,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.128 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.128 00:11:19.128 Run status group 0 (all jobs): 00:11:19.128 READ: bw=54.1MiB/s (56.7MB/s), 12.0MiB/s-15.6MiB/s (12.6MB/s-16.4MB/s), io=54.1MiB (56.8MB), run=1001-1001msec 00:11:19.128 WRITE: bw=57.5MiB/s (60.3MB/s), 12.5MiB/s-16.0MiB/s (13.1MB/s-16.8MB/s), io=57.6MiB (60.4MB), run=1001-1001msec 00:11:19.128 00:11:19.128 Disk stats (read/write): 00:11:19.128 nvme0n1: ios=3527/3584, merge=0/0, ticks=379/346, in_queue=725, util=85.97% 00:11:19.128 nvme0n2: ios=2667/3072, merge=0/0, ticks=360/388, in_queue=748, util=86.75% 00:11:19.128 nvme0n3: ios=3072/3394, merge=0/0, ticks=341/375, in_queue=716, util=88.82% 00:11:19.128 nvme0n4: ios=2560/2737, merge=0/0, ticks=367/376, in_queue=743, util=89.67% 00:11:19.128 01:52:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:19.128 [global] 00:11:19.128 thread=1 00:11:19.128 invalidate=1 00:11:19.128 rw=write 00:11:19.128 time_based=1 00:11:19.128 runtime=1 00:11:19.128 ioengine=libaio 00:11:19.128 direct=1 00:11:19.128 bs=4096 00:11:19.128 iodepth=128 00:11:19.128 norandommap=0 00:11:19.128 numjobs=1 00:11:19.128 00:11:19.128 verify_dump=1 00:11:19.128 verify_backlog=512 00:11:19.128 verify_state_save=0 00:11:19.128 do_verify=1 00:11:19.128 verify=crc32c-intel 00:11:19.128 [job0] 00:11:19.128 filename=/dev/nvme0n1 00:11:19.128 [job1] 00:11:19.128 
filename=/dev/nvme0n2 00:11:19.128 [job2] 00:11:19.128 filename=/dev/nvme0n3 00:11:19.128 [job3] 00:11:19.128 filename=/dev/nvme0n4 00:11:19.128 Could not set queue depth (nvme0n1) 00:11:19.128 Could not set queue depth (nvme0n2) 00:11:19.128 Could not set queue depth (nvme0n3) 00:11:19.128 Could not set queue depth (nvme0n4) 00:11:19.387 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.387 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.387 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.387 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.387 fio-3.35 00:11:19.387 Starting 4 threads 00:11:20.763 00:11:20.763 job0: (groupid=0, jobs=1): err= 0: pid=3166501: Wed Oct 9 01:52:40 2024 00:11:20.763 read: IOPS=7642, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1005msec) 00:11:20.763 slat (usec): min=2, max=5399, avg=64.34, stdev=282.02 00:11:20.763 clat (usec): min=2361, max=24416, avg=8616.97, stdev=2774.79 00:11:20.763 lat (usec): min=3848, max=26070, avg=8681.31, stdev=2792.19 00:11:20.763 clat percentiles (usec): 00:11:20.763 | 1.00th=[ 4752], 5.00th=[ 6521], 10.00th=[ 6783], 20.00th=[ 7111], 00:11:20.763 | 30.00th=[ 7373], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8094], 00:11:20.763 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[11994], 95.00th=[15533], 00:11:20.763 | 99.00th=[19530], 99.50th=[20841], 99.90th=[21627], 99.95th=[22676], 00:11:20.763 | 99.99th=[24511] 00:11:20.763 write: IOPS=8151, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1005msec); 0 zone resets 00:11:20.763 slat (usec): min=2, max=4390, avg=56.86, stdev=222.57 00:11:20.763 clat (usec): min=1395, max=16219, avg=7488.94, stdev=1621.40 00:11:20.763 lat (usec): min=1442, max=16429, avg=7545.81, stdev=1630.19 00:11:20.763 clat percentiles (usec): 00:11:20.763 | 1.00th=[ 3916], 5.00th=[ 5735], 10.00th=[ 6128], 20.00th=[ 6587], 00:11:20.763 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7373], 00:11:20.763 | 70.00th=[ 7570], 80.00th=[ 7898], 90.00th=[ 8848], 95.00th=[11207], 00:11:20.763 | 99.00th=[13960], 99.50th=[14484], 99.90th=[15795], 99.95th=[15795], 00:11:20.763 | 99.99th=[16188] 00:11:20.763 bw ( KiB/s): min=28424, max=36104, per=33.72%, avg=32264.00, stdev=5430.58, samples=2 00:11:20.763 iops : min= 7106, max= 9026, avg=8066.00, stdev=1357.65, samples=2 00:11:20.763 lat (msec) : 2=0.01%, 4=0.68%, 10=87.92%, 20=11.08%, 50=0.32% 00:11:20.763 cpu : usr=4.98%, sys=8.76%, ctx=1085, majf=0, minf=2 00:11:20.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:20.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.763 issued rwts: total=7681,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:20.763 job1: (groupid=0, jobs=1): err= 0: pid=3166515: Wed Oct 9 01:52:40 2024 00:11:20.763 read: IOPS=5101, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:11:20.763 slat (usec): min=2, max=5057, avg=93.58, stdev=368.50 00:11:20.763 clat (usec): min=272, max=27300, avg=12094.05, stdev=4839.94 00:11:20.763 lat (usec): min=1079, max=27307, avg=12187.64, stdev=4880.24 00:11:20.763 clat percentiles (usec): 00:11:20.763 | 1.00th=[ 4178], 5.00th=[ 6915], 10.00th=[ 7177], 20.00th=[ 
7635], 00:11:20.763 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[11469], 60.00th=[13173], 00:11:20.763 | 70.00th=[15401], 80.00th=[16909], 90.00th=[18482], 95.00th=[20841], 00:11:20.763 | 99.00th=[23725], 99.50th=[25035], 99.90th=[25822], 99.95th=[26346], 00:11:20.763 | 99.99th=[27395] 00:11:20.763 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:11:20.763 slat (usec): min=2, max=5312, avg=88.79, stdev=346.64 00:11:20.763 clat (usec): min=1127, max=26912, avg=11524.61, stdev=5272.64 00:11:20.763 lat (usec): min=1200, max=26918, avg=11613.40, stdev=5314.34 00:11:20.763 clat percentiles (usec): 00:11:20.763 | 1.00th=[ 3752], 5.00th=[ 6325], 10.00th=[ 6587], 20.00th=[ 7046], 00:11:20.763 | 30.00th=[ 7570], 40.00th=[ 7832], 50.00th=[10028], 60.00th=[12125], 00:11:20.763 | 70.00th=[13829], 80.00th=[15795], 90.00th=[19792], 95.00th=[22676], 00:11:20.763 | 99.00th=[24773], 99.50th=[25297], 99.90th=[26346], 99.95th=[26608], 00:11:20.763 | 99.99th=[26870] 00:11:20.763 bw ( KiB/s): min=19520, max=24526, per=23.02%, avg=22023.00, stdev=3539.78, samples=2 00:11:20.763 iops : min= 4880, max= 6131, avg=5505.50, stdev=884.59, samples=2 00:11:20.763 lat (usec) : 500=0.01% 00:11:20.763 lat (msec) : 2=0.08%, 4=0.86%, 10=46.71%, 20=44.48%, 50=7.84% 00:11:20.763 cpu : usr=2.39%, sys=3.39%, ctx=849, majf=0, minf=1 00:11:20.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:20.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.763 issued rwts: total=5127,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:20.763 job2: (groupid=0, jobs=1): err= 0: pid=3166526: Wed Oct 9 01:52:40 2024 00:11:20.763 read: IOPS=4897, BW=19.1MiB/s (20.1MB/s)(19.2MiB/1006msec) 00:11:20.763 slat (usec): min=2, max=7490, avg=102.64, stdev=460.17 00:11:20.763 clat (usec): min=2667, max=30136, avg=13122.32, stdev=4375.86 00:11:20.763 lat (usec): min=4189, max=32281, avg=13224.96, stdev=4403.95 00:11:20.763 clat percentiles (usec): 00:11:20.763 | 1.00th=[ 5800], 5.00th=[ 6718], 10.00th=[ 7767], 20.00th=[ 8717], 00:11:20.763 | 30.00th=[ 9765], 40.00th=[11863], 50.00th=[13435], 60.00th=[14222], 00:11:20.763 | 70.00th=[15533], 80.00th=[17171], 90.00th=[18482], 95.00th=[19792], 00:11:20.763 | 99.00th=[23987], 99.50th=[26084], 99.90th=[30016], 99.95th=[30016], 00:11:20.763 | 99.99th=[30016] 00:11:20.763 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:11:20.763 slat (usec): min=2, max=5855, avg=91.51, stdev=395.56 00:11:20.763 clat (usec): min=4223, max=25695, avg=12237.71, stdev=4204.94 00:11:20.763 lat (usec): min=4234, max=26593, avg=12329.22, stdev=4231.33 00:11:20.763 clat percentiles (usec): 00:11:20.763 | 1.00th=[ 5473], 5.00th=[ 7177], 10.00th=[ 7635], 20.00th=[ 8455], 00:11:20.763 | 30.00th=[ 8848], 40.00th=[10290], 50.00th=[11731], 60.00th=[12911], 00:11:20.763 | 70.00th=[14222], 80.00th=[15926], 90.00th=[17957], 95.00th=[21365], 00:11:20.763 | 99.00th=[22676], 99.50th=[23462], 99.90th=[25560], 99.95th=[25560], 00:11:20.763 | 99.99th=[25822] 00:11:20.763 bw ( KiB/s): min=19984, max=20976, per=21.40%, avg=20480.00, stdev=701.45, samples=2 00:11:20.763 iops : min= 4996, max= 5244, avg=5120.00, stdev=175.36, samples=2 00:11:20.763 lat (msec) : 4=0.03%, 10=34.84%, 20=59.57%, 50=5.56% 00:11:20.763 cpu : usr=3.48%, sys=5.87%, ctx=943, majf=0, minf=1 00:11:20.763 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:20.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.764 issued rwts: total=4927,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:20.764 job3: (groupid=0, jobs=1): err= 0: pid=3166532: Wed Oct 9 01:52:40 2024 00:11:20.764 read: IOPS=4997, BW=19.5MiB/s (20.5MB/s)(19.6MiB/1005msec) 00:11:20.764 slat (usec): min=2, max=4797, avg=98.66, stdev=430.16 00:11:20.764 clat (usec): min=3145, max=26532, avg=12632.56, stdev=4381.49 00:11:20.764 lat (usec): min=3203, max=26534, avg=12731.21, stdev=4407.20 00:11:20.764 clat percentiles (usec): 00:11:20.764 | 1.00th=[ 3949], 5.00th=[ 5604], 10.00th=[ 6915], 20.00th=[ 8848], 00:11:20.764 | 30.00th=[10290], 40.00th=[11076], 50.00th=[12649], 60.00th=[13435], 00:11:20.764 | 70.00th=[14746], 80.00th=[16450], 90.00th=[17957], 95.00th=[21103], 00:11:20.764 | 99.00th=[23987], 99.50th=[24511], 99.90th=[25560], 99.95th=[25560], 00:11:20.764 | 99.99th=[26608] 00:11:20.764 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:11:20.764 slat (usec): min=2, max=7868, avg=92.66, stdev=433.10 00:11:20.764 clat (usec): min=3262, max=25667, avg=12460.32, stdev=4328.38 00:11:20.764 lat (usec): min=3309, max=25673, avg=12552.98, stdev=4356.30 00:11:20.764 clat percentiles (usec): 00:11:20.764 | 1.00th=[ 3982], 5.00th=[ 5669], 10.00th=[ 7242], 20.00th=[ 8848], 00:11:20.764 | 30.00th=[ 9896], 40.00th=[11076], 50.00th=[11994], 60.00th=[13304], 00:11:20.764 | 70.00th=[14615], 80.00th=[16319], 90.00th=[17957], 95.00th=[20055], 00:11:20.764 | 99.00th=[23462], 99.50th=[23725], 99.90th=[24773], 99.95th=[24773], 00:11:20.764 | 99.99th=[25560] 00:11:20.764 bw ( KiB/s): min=16384, max=24576, per=21.40%, avg=20480.00, stdev=5792.62, samples=2 00:11:20.764 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:11:20.764 lat (msec) : 4=1.15%, 10=28.14%, 20=65.14%, 50=5.57% 00:11:20.764 cpu : usr=2.59%, sys=6.27%, ctx=904, majf=0, minf=1 00:11:20.764 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:20.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.764 issued rwts: total=5022,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:20.764 00:11:20.764 Run status group 0 (all jobs): 00:11:20.764 READ: bw=88.4MiB/s (92.7MB/s), 19.1MiB/s-29.9MiB/s (20.1MB/s-31.3MB/s), io=88.9MiB (93.2MB), run=1005-1006msec 00:11:20.764 WRITE: bw=93.4MiB/s (98.0MB/s), 19.9MiB/s-31.8MiB/s (20.8MB/s-33.4MB/s), io=94.0MiB (98.6MB), run=1005-1006msec 00:11:20.764 00:11:20.764 Disk stats (read/write): 00:11:20.764 nvme0n1: ios=6829/7168, merge=0/0, ticks=48640/47421, in_queue=96061, util=86.47% 00:11:20.764 nvme0n2: ios=4608/4946, merge=0/0, ticks=27003/26186, in_queue=53189, util=86.59% 00:11:20.764 nvme0n3: ios=4096/4099, merge=0/0, ticks=18509/16666, in_queue=35175, util=88.20% 00:11:20.764 nvme0n4: ios=4461/4608, merge=0/0, ticks=22531/21755, in_queue=44286, util=89.47% 00:11:20.764 01:52:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:20.764 [global] 
00:11:20.764 thread=1 00:11:20.764 invalidate=1 00:11:20.764 rw=randwrite 00:11:20.764 time_based=1 00:11:20.764 runtime=1 00:11:20.764 ioengine=libaio 00:11:20.764 direct=1 00:11:20.764 bs=4096 00:11:20.764 iodepth=128 00:11:20.764 norandommap=0 00:11:20.764 numjobs=1 00:11:20.764 00:11:20.764 verify_dump=1 00:11:20.764 verify_backlog=512 00:11:20.764 verify_state_save=0 00:11:20.764 do_verify=1 00:11:20.764 verify=crc32c-intel 00:11:20.764 [job0] 00:11:20.764 filename=/dev/nvme0n1 00:11:20.764 [job1] 00:11:20.764 filename=/dev/nvme0n2 00:11:20.764 [job2] 00:11:20.764 filename=/dev/nvme0n3 00:11:20.764 [job3] 00:11:20.764 filename=/dev/nvme0n4 00:11:20.764 Could not set queue depth (nvme0n1) 00:11:20.764 Could not set queue depth (nvme0n2) 00:11:20.764 Could not set queue depth (nvme0n3) 00:11:20.764 Could not set queue depth (nvme0n4) 00:11:21.023 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.023 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.023 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.023 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.023 fio-3.35 00:11:21.023 Starting 4 threads 00:11:22.399 00:11:22.399 job0: (groupid=0, jobs=1): err= 0: pid=3166885: Wed Oct 9 01:52:41 2024 00:11:22.399 read: IOPS=6877, BW=26.9MiB/s (28.2MB/s)(27.0MiB/1004msec) 00:11:22.399 slat (nsec): min=1947, max=5385.0k, avg=67920.96, stdev=323141.99 00:11:22.399 clat (usec): min=2780, max=18123, avg=8724.86, stdev=2780.50 00:11:22.399 lat (usec): min=2787, max=18127, avg=8792.78, stdev=2789.69 00:11:22.399 clat percentiles (usec): 00:11:22.399 | 1.00th=[ 4080], 5.00th=[ 5014], 10.00th=[ 5604], 20.00th=[ 6390], 00:11:22.399 | 30.00th=[ 6980], 40.00th=[ 7504], 50.00th=[ 8160], 60.00th=[ 8979], 00:11:22.399 | 70.00th=[ 9896], 80.00th=[11207], 90.00th=[12911], 95.00th=[13829], 00:11:22.399 | 99.00th=[16450], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 00:11:22.399 | 99.99th=[18220] 00:11:22.399 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:11:22.399 slat (usec): min=2, max=5598, avg=69.29, stdev=332.52 00:11:22.399 clat (usec): min=3112, max=19239, avg=9297.97, stdev=3186.69 00:11:22.399 lat (usec): min=3549, max=19242, avg=9367.26, stdev=3203.04 00:11:22.399 clat percentiles (usec): 00:11:22.399 | 1.00th=[ 4686], 5.00th=[ 5342], 10.00th=[ 5669], 20.00th=[ 6325], 00:11:22.399 | 30.00th=[ 6980], 40.00th=[ 7635], 50.00th=[ 8717], 60.00th=[10028], 00:11:22.399 | 70.00th=[10945], 80.00th=[12125], 90.00th=[13435], 95.00th=[15270], 00:11:22.399 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19268], 99.95th=[19268], 00:11:22.399 | 99.99th=[19268] 00:11:22.399 bw ( KiB/s): min=27200, max=30144, per=30.09%, avg=28672.00, stdev=2081.72, samples=2 00:11:22.399 iops : min= 6800, max= 7536, avg=7168.00, stdev=520.43, samples=2 00:11:22.399 lat (msec) : 4=0.51%, 10=64.80%, 20=34.69% 00:11:22.399 cpu : usr=4.29%, sys=6.78%, ctx=1254, majf=0, minf=1 00:11:22.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:22.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.399 issued rwts: total=6905,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.399 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:11:22.399 job1: (groupid=0, jobs=1): err= 0: pid=3166886: Wed Oct 9 01:52:41 2024 00:11:22.399 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:11:22.399 slat (usec): min=2, max=5809, avg=84.37, stdev=415.21 00:11:22.399 clat (usec): min=779, max=24117, avg=11028.40, stdev=3997.64 00:11:22.399 lat (usec): min=1373, max=26240, avg=11112.77, stdev=4014.21 00:11:22.399 clat percentiles (usec): 00:11:22.399 | 1.00th=[ 4015], 5.00th=[ 5997], 10.00th=[ 6718], 20.00th=[ 7701], 00:11:22.399 | 30.00th=[ 8291], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[11469], 00:11:22.399 | 70.00th=[12780], 80.00th=[14091], 90.00th=[17171], 95.00th=[18744], 00:11:22.399 | 99.00th=[22414], 99.50th=[22938], 99.90th=[23987], 99.95th=[23987], 00:11:22.399 | 99.99th=[24249] 00:11:22.399 write: IOPS=5987, BW=23.4MiB/s (24.5MB/s)(23.5MiB/1005msec); 0 zone resets 00:11:22.399 slat (usec): min=2, max=7382, avg=82.25, stdev=390.97 00:11:22.399 clat (usec): min=2938, max=24635, avg=10801.81, stdev=3955.78 00:11:22.399 lat (usec): min=2943, max=24666, avg=10884.06, stdev=3977.54 00:11:22.399 clat percentiles (usec): 00:11:22.399 | 1.00th=[ 4359], 5.00th=[ 5473], 10.00th=[ 5997], 20.00th=[ 7046], 00:11:22.399 | 30.00th=[ 7898], 40.00th=[ 8979], 50.00th=[10290], 60.00th=[11863], 00:11:22.399 | 70.00th=[12911], 80.00th=[14484], 90.00th=[16581], 95.00th=[17957], 00:11:22.399 | 99.00th=[20317], 99.50th=[21103], 99.90th=[22414], 99.95th=[22414], 00:11:22.399 | 99.99th=[24511] 00:11:22.399 bw ( KiB/s): min=22088, max=25032, per=24.73%, avg=23560.00, stdev=2081.72, samples=2 00:11:22.399 iops : min= 5522, max= 6258, avg=5890.00, stdev=520.43, samples=2 00:11:22.399 lat (usec) : 1000=0.01% 00:11:22.399 lat (msec) : 2=0.04%, 4=0.47%, 10=48.00%, 20=49.78%, 50=1.69% 00:11:22.399 cpu : usr=3.49%, sys=5.68%, ctx=1156, majf=0, minf=1 00:11:22.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:22.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.399 issued rwts: total=5632,6017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.399 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.400 job2: (groupid=0, jobs=1): err= 0: pid=3166887: Wed Oct 9 01:52:41 2024 00:11:22.400 read: IOPS=4753, BW=18.6MiB/s (19.5MB/s)(18.6MiB/1003msec) 00:11:22.400 slat (usec): min=2, max=7140, avg=105.94, stdev=468.52 00:11:22.400 clat (usec): min=685, max=25175, avg=13435.33, stdev=4192.29 00:11:22.400 lat (usec): min=4310, max=25179, avg=13541.27, stdev=4205.50 00:11:22.400 clat percentiles (usec): 00:11:22.400 | 1.00th=[ 5276], 5.00th=[ 6718], 10.00th=[ 7439], 20.00th=[ 9503], 00:11:22.400 | 30.00th=[10945], 40.00th=[12780], 50.00th=[13698], 60.00th=[14746], 00:11:22.400 | 70.00th=[16057], 80.00th=[17171], 90.00th=[18220], 95.00th=[20055], 00:11:22.400 | 99.00th=[23987], 99.50th=[24511], 99.90th=[25297], 99.95th=[25297], 00:11:22.400 | 99.99th=[25297] 00:11:22.400 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:11:22.400 slat (usec): min=2, max=6521, avg=92.02, stdev=432.76 00:11:22.400 clat (usec): min=4077, max=22864, avg=12305.50, stdev=3587.09 00:11:22.400 lat (usec): min=4080, max=22868, avg=12397.51, stdev=3603.61 00:11:22.400 clat percentiles (usec): 00:11:22.400 | 1.00th=[ 5800], 5.00th=[ 6915], 10.00th=[ 7832], 20.00th=[ 8979], 00:11:22.400 | 30.00th=[ 9765], 40.00th=[10945], 50.00th=[12125], 
60.00th=[13566], 00:11:22.400 | 70.00th=[14353], 80.00th=[15401], 90.00th=[16909], 95.00th=[18482], 00:11:22.400 | 99.00th=[21365], 99.50th=[21627], 99.90th=[22676], 99.95th=[22676], 00:11:22.400 | 99.99th=[22938] 00:11:22.400 bw ( KiB/s): min=20472, max=20488, per=21.50%, avg=20480.00, stdev=11.31, samples=2 00:11:22.400 iops : min= 5118, max= 5122, avg=5120.00, stdev= 2.83, samples=2 00:11:22.400 lat (usec) : 750=0.01% 00:11:22.400 lat (msec) : 4=0.01%, 10=28.55%, 20=68.28%, 50=3.15% 00:11:22.400 cpu : usr=2.99%, sys=5.59%, ctx=1088, majf=0, minf=2 00:11:22.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:22.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.400 issued rwts: total=4768,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.400 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.400 job3: (groupid=0, jobs=1): err= 0: pid=3166889: Wed Oct 9 01:52:41 2024 00:11:22.400 read: IOPS=5177, BW=20.2MiB/s (21.2MB/s)(20.3MiB/1004msec) 00:11:22.400 slat (usec): min=2, max=7664, avg=96.34, stdev=443.80 00:11:22.400 clat (usec): min=2108, max=26039, avg=12467.04, stdev=4071.82 00:11:22.400 lat (usec): min=3952, max=26047, avg=12563.38, stdev=4093.40 00:11:22.400 clat percentiles (usec): 00:11:22.400 | 1.00th=[ 4883], 5.00th=[ 7308], 10.00th=[ 8094], 20.00th=[ 8586], 00:11:22.400 | 30.00th=[ 9241], 40.00th=[10814], 50.00th=[11994], 60.00th=[13435], 00:11:22.400 | 70.00th=[14746], 80.00th=[16057], 90.00th=[18482], 95.00th=[19792], 00:11:22.400 | 99.00th=[21890], 99.50th=[22938], 99.90th=[24511], 99.95th=[24511], 00:11:22.400 | 99.99th=[26084] 00:11:22.400 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:11:22.400 slat (usec): min=2, max=7816, avg=83.68, stdev=398.86 00:11:22.400 clat (usec): min=3779, max=22201, avg=11020.11, stdev=3382.15 00:11:22.400 lat (usec): min=3790, max=22212, avg=11103.79, stdev=3396.15 00:11:22.400 clat percentiles (usec): 00:11:22.400 | 1.00th=[ 5276], 5.00th=[ 6456], 10.00th=[ 7308], 20.00th=[ 8225], 00:11:22.400 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[10290], 60.00th=[11469], 00:11:22.400 | 70.00th=[12649], 80.00th=[13960], 90.00th=[15533], 95.00th=[17695], 00:11:22.400 | 99.00th=[20317], 99.50th=[20841], 99.90th=[21890], 99.95th=[21890], 00:11:22.400 | 99.99th=[22152] 00:11:22.400 bw ( KiB/s): min=21472, max=23192, per=23.44%, avg=22332.00, stdev=1216.22, samples=2 00:11:22.400 iops : min= 5368, max= 5798, avg=5583.00, stdev=304.06, samples=2 00:11:22.400 lat (msec) : 4=0.11%, 10=41.38%, 20=55.81%, 50=2.71% 00:11:22.400 cpu : usr=3.29%, sys=6.18%, ctx=1027, majf=0, minf=1 00:11:22.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:22.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.400 issued rwts: total=5198,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.400 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.400 00:11:22.400 Run status group 0 (all jobs): 00:11:22.400 READ: bw=87.5MiB/s (91.7MB/s), 18.6MiB/s-26.9MiB/s (19.5MB/s-28.2MB/s), io=87.9MiB (92.2MB), run=1003-1005msec 00:11:22.400 WRITE: bw=93.0MiB/s (97.6MB/s), 19.9MiB/s-27.9MiB/s (20.9MB/s-29.2MB/s), io=93.5MiB (98.0MB), run=1003-1005msec 00:11:22.400 00:11:22.400 Disk stats (read/write): 00:11:22.400 nvme0n1: ios=6194/6372, 
merge=0/0, ticks=15246/16078, in_queue=31324, util=85.27% 00:11:22.400 nvme0n2: ios=4742/5120, merge=0/0, ticks=15235/16492, in_queue=31727, util=85.74% 00:11:22.400 nvme0n3: ios=4096/4421, merge=0/0, ticks=16302/15654, in_queue=31956, util=87.58% 00:11:22.400 nvme0n4: ios=4450/4608, merge=0/0, ticks=16581/16223, in_queue=32804, util=88.62% 00:11:22.400 01:52:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:22.400 01:52:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3167042 00:11:22.400 01:52:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:22.400 01:52:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:22.400 [global] 00:11:22.400 thread=1 00:11:22.400 invalidate=1 00:11:22.400 rw=read 00:11:22.400 time_based=1 00:11:22.400 runtime=10 00:11:22.400 ioengine=libaio 00:11:22.400 direct=1 00:11:22.400 bs=4096 00:11:22.400 iodepth=1 00:11:22.400 norandommap=1 00:11:22.400 numjobs=1 00:11:22.400 00:11:22.400 [job0] 00:11:22.400 filename=/dev/nvme0n1 00:11:22.400 [job1] 00:11:22.400 filename=/dev/nvme0n2 00:11:22.400 [job2] 00:11:22.400 filename=/dev/nvme0n3 00:11:22.400 [job3] 00:11:22.400 filename=/dev/nvme0n4 00:11:22.400 Could not set queue depth (nvme0n1) 00:11:22.400 Could not set queue depth (nvme0n2) 00:11:22.400 Could not set queue depth (nvme0n3) 00:11:22.400 Could not set queue depth (nvme0n4) 00:11:22.400 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.400 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.400 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.400 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.400 fio-3.35 00:11:22.400 Starting 4 threads 00:11:25.690 01:52:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:25.690 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=54902784, buflen=4096 00:11:25.690 fio: pid=3167192, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:25.690 01:52:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:25.690 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=63717376, buflen=4096 00:11:25.690 fio: pid=3167191, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:25.690 01:52:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:25.690 01:52:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:25.690 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11247616, buflen=4096 00:11:25.690 fio: pid=3167189, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:25.949 01:52:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:11:25.949 01:52:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:26.209 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=21544960, buflen=4096 00:11:26.209 fio: pid=3167190, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:26.209 00:11:26.209 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3167189: Wed Oct 9 01:52:45 2024 00:11:26.209 read: IOPS=6133, BW=24.0MiB/s (25.1MB/s)(74.7MiB/3119msec) 00:11:26.209 slat (usec): min=8, max=30753, avg=13.66, stdev=328.83 00:11:26.209 clat (usec): min=69, max=573, avg=147.26, stdev=46.05 00:11:26.209 lat (usec): min=79, max=30893, avg=160.92, stdev=331.90 00:11:26.209 clat percentiles (usec): 00:11:26.209 | 1.00th=[ 84], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 97], 00:11:26.209 | 30.00th=[ 100], 40.00th=[ 109], 50.00th=[ 174], 60.00th=[ 184], 00:11:26.209 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 198], 95.00th=[ 204], 00:11:26.209 | 99.00th=[ 212], 99.50th=[ 215], 99.90th=[ 235], 99.95th=[ 249], 00:11:26.209 | 99.99th=[ 420] 00:11:26.209 bw ( KiB/s): min=19704, max=34544, per=31.06%, avg=24546.00, stdev=5555.29, samples=6 00:11:26.209 iops : min= 4926, max= 8638, avg=6136.67, stdev=1389.64, samples=6 00:11:26.209 lat (usec) : 100=29.73%, 250=70.22%, 500=0.04%, 750=0.01% 00:11:26.209 cpu : usr=2.98%, sys=6.45%, ctx=19137, majf=0, minf=1 00:11:26.209 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.209 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.209 issued rwts: total=19131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.209 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.209 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3167190: Wed Oct 9 01:52:45 2024 00:11:26.209 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(84.5MiB/3530msec) 00:11:26.209 slat (usec): min=8, max=15869, avg=12.83, stdev=217.66 00:11:26.209 clat (usec): min=58, max=301, avg=148.27, stdev=51.28 00:11:26.209 lat (usec): min=77, max=16091, avg=161.10, stdev=223.62 00:11:26.209 clat percentiles (usec): 00:11:26.209 | 1.00th=[ 73], 5.00th=[ 75], 10.00th=[ 78], 20.00th=[ 84], 00:11:26.209 | 30.00th=[ 99], 40.00th=[ 119], 50.00th=[ 182], 60.00th=[ 188], 00:11:26.209 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 200], 95.00th=[ 204], 00:11:26.209 | 99.00th=[ 212], 99.50th=[ 215], 99.90th=[ 233], 99.95th=[ 239], 00:11:26.209 | 99.99th=[ 253] 00:11:26.209 bw ( KiB/s): min=19560, max=28344, per=28.01%, avg=22136.00, stdev=3664.96, samples=6 00:11:26.209 iops : min= 4890, max= 7086, avg=5533.83, stdev=916.29, samples=6 00:11:26.209 lat (usec) : 100=31.54%, 250=68.45%, 500=0.01% 00:11:26.209 cpu : usr=2.52%, sys=6.91%, ctx=21651, majf=0, minf=2 00:11:26.209 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.209 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.209 issued rwts: total=21645,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.209 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.209 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, 
error=Operation not supported): pid=3167191: Wed Oct 9 01:52:45 2024 00:11:26.209 read: IOPS=5296, BW=20.7MiB/s (21.7MB/s)(60.8MiB/2937msec) 00:11:26.209 slat (usec): min=8, max=11839, avg=10.87, stdev=113.46 00:11:26.209 clat (usec): min=84, max=797, avg=174.60, stdev=33.52 00:11:26.209 lat (usec): min=103, max=12030, avg=185.46, stdev=118.35 00:11:26.209 clat percentiles (usec): 00:11:26.209 | 1.00th=[ 101], 5.00th=[ 105], 10.00th=[ 110], 20.00th=[ 145], 00:11:26.209 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 190], 00:11:26.209 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 202], 95.00th=[ 206], 00:11:26.209 | 99.00th=[ 217], 99.50th=[ 223], 99.90th=[ 255], 99.95th=[ 265], 00:11:26.209 | 99.99th=[ 306] 00:11:26.209 bw ( KiB/s): min=19520, max=24904, per=26.88%, avg=21243.00, stdev=2115.86, samples=5 00:11:26.209 iops : min= 4880, max= 6226, avg=5310.60, stdev=529.04, samples=5 00:11:26.209 lat (usec) : 100=0.71%, 250=99.15%, 500=0.13%, 1000=0.01% 00:11:26.209 cpu : usr=2.32%, sys=5.96%, ctx=15559, majf=0, minf=2 00:11:26.209 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.209 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.209 issued rwts: total=15557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.209 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.209 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3167192: Wed Oct 9 01:52:45 2024 00:11:26.209 read: IOPS=4908, BW=19.2MiB/s (20.1MB/s)(52.4MiB/2731msec) 00:11:26.209 slat (nsec): min=8667, max=36417, avg=9491.38, stdev=1057.21 00:11:26.209 clat (usec): min=108, max=662, avg=190.81, stdev=11.56 00:11:26.210 lat (usec): min=117, max=672, avg=200.31, stdev=11.55 00:11:26.210 clat percentiles (usec): 00:11:26.210 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:11:26.210 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:11:26.210 | 70.00th=[ 196], 80.00th=[ 198], 90.00th=[ 204], 95.00th=[ 206], 00:11:26.210 | 99.00th=[ 217], 99.50th=[ 231], 99.90th=[ 265], 99.95th=[ 273], 00:11:26.210 | 99.99th=[ 306] 00:11:26.210 bw ( KiB/s): min=19480, max=20392, per=25.19%, avg=19902.40, stdev=386.88, samples=5 00:11:26.210 iops : min= 4870, max= 5098, avg=4975.60, stdev=96.72, samples=5 00:11:26.210 lat (usec) : 250=99.74%, 500=0.25%, 750=0.01% 00:11:26.210 cpu : usr=2.12%, sys=5.46%, ctx=13405, majf=0, minf=2 00:11:26.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.210 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.210 issued rwts: total=13405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.210 00:11:26.210 Run status group 0 (all jobs): 00:11:26.210 READ: bw=77.2MiB/s (80.9MB/s), 19.2MiB/s-24.0MiB/s (20.1MB/s-25.1MB/s), io=272MiB (286MB), run=2731-3530msec 00:11:26.210 00:11:26.210 Disk stats (read/write): 00:11:26.210 nvme0n1: ios=19074/0, merge=0/0, ticks=2654/0, in_queue=2654, util=93.25% 00:11:26.210 nvme0n2: ios=19769/0, merge=0/0, ticks=2969/0, in_queue=2969, util=94.34% 00:11:26.210 nvme0n3: ios=15172/0, merge=0/0, ticks=2521/0, in_queue=2521, util=95.94% 00:11:26.210 nvme0n4: ios=12918/0, merge=0/0, ticks=2390/0, in_queue=2390, util=96.44% 
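The fio summary above is the read half of the hotplug test: target/fio.sh launches a background 10-second read job against the four namespaces and then deletes the backing bdevs out from under it, which is why every job ends with an "Operation not supported" io_u error. Below is a condensed sketch of that sequence, assembled only from commands and arguments that appear verbatim in this log; SPDK is shorthand for the workspace path, and this is a sketch, not the literal body of target/fio.sh.

  SPDK=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk

  # Start the 10-second, queue-depth-1, 4 KiB read workload in the background.
  "$SPDK"/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3

  # Delete the backing bdevs while fio is still reading; reads against the
  # vanished namespaces surface as 'Operation not supported' io_u errors.
  "$SPDK"/scripts/rpc.py bdev_raid_delete concat0
  "$SPDK"/scripts/rpc.py bdev_raid_delete raid0
  "$SPDK"/scripts/rpc.py bdev_malloc_delete Malloc0
  "$SPDK"/scripts/rpc.py bdev_malloc_delete Malloc1

  # A non-zero fio exit status is the expected outcome here.
  wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'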
00:11:26.469 01:52:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.469 01:52:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:26.727 01:52:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.727 01:52:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:27.294 01:52:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.294 01:52:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:27.553 01:52:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.553 01:52:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:28.119 01:52:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:28.119 01:52:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:28.120 01:52:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:28.120 01:52:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3167042 00:11:28.120 01:52:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:28.120 01:52:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.055 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:29.055 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:29.055 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:29.055 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.055 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:29.055 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.055 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:29.055 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:29.055 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:29.055 nvmf hotplug test: fio failed as expected 00:11:29.055 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.313 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:29.313 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:29.313 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:29.313 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:29.313 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:29.313 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:29.313 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:29.313 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:29.313 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:29.313 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:29.313 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:29.313 01:52:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:29.313 rmmod nvme_rdma 00:11:29.313 rmmod nvme_fabrics 00:11:29.314 01:52:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:29.314 01:52:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:29.314 01:52:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:29.314 01:52:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 3164777 ']' 00:11:29.314 01:52:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 3164777 00:11:29.314 01:52:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3164777 ']' 00:11:29.314 01:52:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3164777 00:11:29.314 01:52:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:29.314 01:52:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:29.314 01:52:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3164777 00:11:29.314 01:52:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:29.314 01:52:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:29.314 01:52:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3164777' 00:11:29.314 killing process with pid 3164777 00:11:29.314 01:52:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3164777 00:11:29.314 01:52:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3164777 00:11:30.690 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:30.690 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ 
rdma == \t\c\p ]] 00:11:30.690 00:11:30.690 real 0m28.893s 00:11:30.690 user 1m45.838s 00:11:30.690 sys 0m9.990s 00:11:30.690 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:30.690 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.690 ************************************ 00:11:30.690 END TEST nvmf_fio_target 00:11:30.690 ************************************ 00:11:30.690 01:52:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:11:30.690 01:52:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:30.690 01:52:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:30.690 01:52:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:30.690 ************************************ 00:11:30.690 START TEST nvmf_bdevio 00:11:30.690 ************************************ 00:11:30.690 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:11:30.950 * Looking for test storage... 00:11:30.950 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:30.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.950 --rc genhtml_branch_coverage=1 00:11:30.950 --rc genhtml_function_coverage=1 00:11:30.950 --rc genhtml_legend=1 00:11:30.950 --rc geninfo_all_blocks=1 00:11:30.950 --rc geninfo_unexecuted_blocks=1 00:11:30.950 00:11:30.950 ' 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:30.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.950 --rc genhtml_branch_coverage=1 00:11:30.950 --rc genhtml_function_coverage=1 00:11:30.950 --rc genhtml_legend=1 00:11:30.950 --rc geninfo_all_blocks=1 00:11:30.950 --rc geninfo_unexecuted_blocks=1 00:11:30.950 00:11:30.950 ' 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:30.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.950 --rc genhtml_branch_coverage=1 00:11:30.950 --rc genhtml_function_coverage=1 00:11:30.950 --rc genhtml_legend=1 00:11:30.950 --rc geninfo_all_blocks=1 00:11:30.950 --rc geninfo_unexecuted_blocks=1 00:11:30.950 00:11:30.950 ' 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:30.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.950 --rc genhtml_branch_coverage=1 00:11:30.950 --rc genhtml_function_coverage=1 00:11:30.950 --rc genhtml_legend=1 00:11:30.950 --rc geninfo_all_blocks=1 00:11:30.950 --rc geninfo_unexecuted_blocks=1 00:11:30.950 00:11:30.950 ' 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:30.950 01:52:50 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.950 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:30.951 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:30.951 01:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:11:37.516 Found 0000:18:00.0 (0x8086 - 0x159b) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:11:37.516 Found 0000:18:00.1 (0x8086 - 0x159b) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@403 -- # modinfo irdma 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:11:37.516 Found net devices under 0000:18:00.0: cvl_0_0 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:11:37.516 Found net devices under 0000:18:00.1: cvl_0_1 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # rdma_device_init 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@527 -- # load_ib_rdma_modules 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@528 -- # allocate_nic_ips 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo cvl_0_0 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:11:37.516 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo cvl_0_1 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:11:37.517 
01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:11:37.517 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:11:37.517 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:11:37.517 altname enp24s0f0np0 00:11:37.517 altname ens785f0np0 00:11:37.517 inet 192.168.100.8/24 scope global cvl_0_0 00:11:37.517 valid_lft forever preferred_lft forever 00:11:37.517 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:11:37.517 valid_lft forever preferred_lft forever 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:11:37.517 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:11:37.517 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:11:37.517 altname enp24s0f1np1 00:11:37.517 altname ens785f1np1 00:11:37.517 inet 192.168.100.9/24 scope global cvl_0_1 00:11:37.517 valid_lft forever preferred_lft forever 00:11:37.517 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:11:37.517 valid_lft forever preferred_lft forever 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo cvl_0_0 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo cvl_0_1 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:11:37.517 192.168.100.9' 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:11:37.517 192.168.100.9' 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # head -n 1 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:37.517 01:52:57 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:11:37.517 192.168.100.9' 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # head -n 1 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # tail -n +2 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:37.517 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:37.776 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=3171192 00:11:37.776 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:37.776 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 3171192 00:11:37.776 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3171192 ']' 00:11:37.776 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.776 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:37.776 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.777 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:37.777 01:52:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:37.777 [2024-10-09 01:52:57.426754] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:11:37.777 [2024-10-09 01:52:57.426879] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.777 [2024-10-09 01:52:57.559511] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.035 [2024-10-09 01:52:57.764381] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.035 [2024-10-09 01:52:57.764442] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
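Before nvmf_tgt comes up, nvmf/common.sh loads the kernel RDMA stack and derives the target addresses from the two E810 netdevs. A hedged recap of that flow, using only the commands visible in the preceding load_ib_rdma_modules/get_ip_address traces (the cvl_0_0/cvl_0_1 interface names and 192.168.100.x addresses are specific to this testbed):

  # Kernel modules loaded by load_ib_rdma_modules.
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$m"
  done

  # get_ip_address: first IPv4 address on an interface, prefix length stripped.
  get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }

  NVMF_FIRST_TARGET_IP=$(get_ip_address cvl_0_0)    # 192.168.100.8 on this host
  NVMF_SECOND_TARGET_IP=$(get_ip_address cvl_0_1)   # 192.168.100.9 on this host
  NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
  modprobe nvme-rdma    # host-side driver used by the later nvme connect/disconnect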
00:11:38.035 [2024-10-09 01:52:57.764456] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.035 [2024-10-09 01:52:57.764472] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.035 [2024-10-09 01:52:57.764483] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:38.035 [2024-10-09 01:52:57.766881] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:11:38.035 [2024-10-09 01:52:57.766966] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:11:38.035 [2024-10-09 01:52:57.767031] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.035 [2024-10-09 01:52:57.767053] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.600 [2024-10-09 01:52:58.314547] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x6120000292c0/0x617000007c40) succeed. 00:11:38.600 [2024-10-09 01:52:58.324148] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029440/0x617000007fc0) succeed. 00:11:38.600 [2024-10-09 01:52:58.324185] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.600 Malloc0 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.600 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.858 [2024-10-09 01:52:58.422566] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:38.858 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.858 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:38.858 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:38.858 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:11:38.858 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:11:38.858 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:38.858 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:38.858 { 00:11:38.858 "params": { 00:11:38.858 "name": "Nvme$subsystem", 00:11:38.858 "trtype": "$TEST_TRANSPORT", 00:11:38.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:38.858 "adrfam": "ipv4", 00:11:38.858 "trsvcid": "$NVMF_PORT", 00:11:38.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:38.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:38.858 "hdgst": ${hdgst:-false}, 00:11:38.858 "ddgst": ${ddgst:-false} 00:11:38.858 }, 00:11:38.858 "method": "bdev_nvme_attach_controller" 00:11:38.858 } 00:11:38.858 EOF 00:11:38.858 )") 
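Taken together, the rpc_cmd traces above provision the whole target side before bdevio starts: the RDMA transport, a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and an RDMA listener on 192.168.100.8:4420 (the rendered initiator JSON produced by gen_nvmf_target_json is printed just below). Condensed into plain rpc.py calls, with every name and value taken verbatim from this log:

  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma \
      -a 192.168.100.8 -s 4420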
00:11:38.858 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:11:38.858 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:11:38.858 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:11:38.858 01:52:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:38.858 "params": { 00:11:38.858 "name": "Nvme1", 00:11:38.858 "trtype": "rdma", 00:11:38.858 "traddr": "192.168.100.8", 00:11:38.858 "adrfam": "ipv4", 00:11:38.858 "trsvcid": "4420", 00:11:38.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:38.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:38.858 "hdgst": false, 00:11:38.858 "ddgst": false 00:11:38.858 }, 00:11:38.858 "method": "bdev_nvme_attach_controller" 00:11:38.858 }' 00:11:38.858 [2024-10-09 01:52:58.509932] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:11:38.858 [2024-10-09 01:52:58.510031] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3171395 ] 00:11:38.858 [2024-10-09 01:52:58.636597] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:39.116 [2024-10-09 01:52:58.844216] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.116 [2024-10-09 01:52:58.844274] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.116 [2024-10-09 01:52:58.844279] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.682 I/O targets: 00:11:39.682 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:39.682 00:11:39.682 00:11:39.682 CUnit - A unit testing framework for C - Version 2.1-3 00:11:39.682 http://cunit.sourceforge.net/ 00:11:39.682 00:11:39.682 00:11:39.682 Suite: bdevio tests on: Nvme1n1 00:11:39.682 Test: blockdev write read block ...passed 00:11:39.682 Test: blockdev write zeroes read block ...passed 00:11:39.682 Test: blockdev write zeroes read no split ...passed 00:11:39.682 Test: blockdev write zeroes read split ...passed 00:11:39.682 Test: blockdev write zeroes read split partial ...passed 00:11:39.682 Test: blockdev reset ...[2024-10-09 01:52:59.332991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:39.682 [2024-10-09 01:52:59.369769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:11:39.682 [2024-10-09 01:52:59.405416] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:39.682 passed 00:11:39.682 Test: blockdev write read 8 blocks ...passed 00:11:39.682 Test: blockdev write read size > 128k ...passed 00:11:39.682 Test: blockdev write read invalid size ...passed 00:11:39.682 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:39.682 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:39.682 Test: blockdev write read max offset ...passed 00:11:39.682 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:39.682 Test: blockdev writev readv 8 blocks ...passed 00:11:39.682 Test: blockdev writev readv 30 x 1block ...passed 00:11:39.682 Test: blockdev writev readv block ...passed 00:11:39.682 Test: blockdev writev readv size > 128k ...passed 00:11:39.682 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:39.682 Test: blockdev comparev and writev ...[2024-10-09 01:52:59.411511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.682 [2024-10-09 01:52:59.411564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:39.682 [2024-10-09 01:52:59.411588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.682 [2024-10-09 01:52:59.411605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:39.682 [2024-10-09 01:52:59.411822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.682 [2024-10-09 01:52:59.411844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:11:39.682 [2024-10-09 01:52:59.411860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.682 [2024-10-09 01:52:59.411875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:11:39.682 [2024-10-09 01:52:59.412092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.682 [2024-10-09 01:52:59.412117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:11:39.682 [2024-10-09 01:52:59.412135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.682 [2024-10-09 01:52:59.412151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:39.682 [2024-10-09 01:52:59.412342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.682 [2024-10-09 01:52:59.412363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:39.682 [2024-10-09 01:52:59.412378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.682 [2024-10-09 01:52:59.412393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:39.682 passed 00:11:39.682 Test: blockdev nvme passthru rw ...passed 00:11:39.682 Test: blockdev nvme passthru vendor specific ...[2024-10-09 01:52:59.412764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:39.682 [2024-10-09 01:52:59.412788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:11:39.682 [2024-10-09 01:52:59.412854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:39.682 [2024-10-09 01:52:59.412871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:11:39.682 [2024-10-09 01:52:59.412936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:39.682 [2024-10-09 01:52:59.412954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:11:39.682 [2024-10-09 01:52:59.413018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:39.682 [2024-10-09 01:52:59.413038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:11:39.682 passed 00:11:39.682 Test: blockdev nvme admin passthru ...passed 00:11:39.682 Test: blockdev copy ...passed 00:11:39.682 00:11:39.682 Run Summary: Type Total Ran Passed Failed Inactive 00:11:39.682 suites 1 1 n/a 0 0 00:11:39.682 tests 23 23 23 0 0 00:11:39.682 asserts 152 152 152 0 n/a 00:11:39.682 00:11:39.682 Elapsed time = 0.408 seconds 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:41.055 rmmod nvme_rdma 00:11:41.055 rmmod nvme_fabrics 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:41.055 01:53:00 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 3171192 ']' 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 3171192 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3171192 ']' 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3171192 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3171192 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3171192' 00:11:41.055 killing process with pid 3171192 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3171192 00:11:41.055 01:53:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3171192 00:11:42.429 01:53:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:42.429 01:53:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:11:42.429 00:11:42.429 real 0m11.598s 00:11:42.429 user 0m22.094s 00:11:42.429 sys 0m5.896s 00:11:42.429 01:53:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:42.429 01:53:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.429 ************************************ 00:11:42.429 END TEST nvmf_bdevio 00:11:42.429 ************************************ 00:11:42.429 01:53:02 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:42.429 00:11:42.429 real 4m33.875s 00:11:42.429 user 11m41.546s 00:11:42.429 sys 1m38.958s 00:11:42.429 01:53:02 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:42.429 01:53:02 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:42.429 ************************************ 00:11:42.429 END TEST nvmf_target_core 00:11:42.429 ************************************ 00:11:42.429 01:53:02 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:11:42.429 01:53:02 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:42.429 01:53:02 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.429 01:53:02 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:42.429 ************************************ 00:11:42.429 START TEST nvmf_target_extra 00:11:42.429 ************************************ 00:11:42.429 01:53:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:11:42.689 * Looking for test storage... 00:11:42.689 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:42.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.689 --rc genhtml_branch_coverage=1 00:11:42.689 --rc genhtml_function_coverage=1 00:11:42.689 --rc genhtml_legend=1 00:11:42.689 --rc geninfo_all_blocks=1 00:11:42.689 --rc geninfo_unexecuted_blocks=1 00:11:42.689 00:11:42.689 ' 00:11:42.689 01:53:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:42.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.689 --rc genhtml_branch_coverage=1 00:11:42.690 --rc genhtml_function_coverage=1 00:11:42.690 --rc genhtml_legend=1 00:11:42.690 --rc geninfo_all_blocks=1 00:11:42.690 --rc geninfo_unexecuted_blocks=1 00:11:42.690 00:11:42.690 ' 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:42.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.690 --rc genhtml_branch_coverage=1 00:11:42.690 --rc genhtml_function_coverage=1 00:11:42.690 --rc genhtml_legend=1 00:11:42.690 --rc geninfo_all_blocks=1 00:11:42.690 --rc geninfo_unexecuted_blocks=1 00:11:42.690 00:11:42.690 ' 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:42.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.690 --rc genhtml_branch_coverage=1 00:11:42.690 --rc genhtml_function_coverage=1 00:11:42.690 --rc genhtml_legend=1 00:11:42.690 --rc geninfo_all_blocks=1 00:11:42.690 --rc geninfo_unexecuted_blocks=1 00:11:42.690 00:11:42.690 ' 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.690 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:42.690 ************************************ 00:11:42.690 START TEST nvmf_example 00:11:42.690 ************************************ 00:11:42.690 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:11:42.951 * Looking for test storage... 
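The START TEST / END TEST banners and the real/user/sys totals bracketing each suite come from the run_test wrapper invoked above. A minimal sketch of the visible pattern, with the argument validation and xtrace toggling seen in the autotest_common.sh trace stripped away:

    # Simplified run_test: opening banner, timed execution, closing banner.
    run_test() {
        local test_name=$1; shift
        echo "START TEST $test_name"
        time "$@"                       # produces the real/user/sys lines
        echo "END TEST $test_name"
    }
    run_test nvmf_example /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma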
00:11:42.951 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:42.951 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:42.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.952 --rc genhtml_branch_coverage=1 00:11:42.952 --rc genhtml_function_coverage=1 00:11:42.952 --rc genhtml_legend=1 00:11:42.952 --rc geninfo_all_blocks=1 00:11:42.952 --rc geninfo_unexecuted_blocks=1 00:11:42.952 00:11:42.952 ' 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:42.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.952 --rc genhtml_branch_coverage=1 00:11:42.952 --rc genhtml_function_coverage=1 00:11:42.952 --rc genhtml_legend=1 00:11:42.952 --rc geninfo_all_blocks=1 00:11:42.952 --rc geninfo_unexecuted_blocks=1 00:11:42.952 00:11:42.952 ' 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:42.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.952 --rc genhtml_branch_coverage=1 00:11:42.952 --rc genhtml_function_coverage=1 00:11:42.952 --rc genhtml_legend=1 00:11:42.952 --rc geninfo_all_blocks=1 00:11:42.952 --rc geninfo_unexecuted_blocks=1 00:11:42.952 00:11:42.952 ' 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:42.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.952 --rc genhtml_branch_coverage=1 00:11:42.952 --rc genhtml_function_coverage=1 00:11:42.952 --rc genhtml_legend=1 00:11:42.952 --rc geninfo_all_blocks=1 00:11:42.952 --rc geninfo_unexecuted_blocks=1 00:11:42.952 00:11:42.952 ' 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
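The lcov probe just traced is a field-wise dotted-version comparison: scripts/common.sh splits both versions on the IFS=.-: set and walks the fields numerically, so lt 1.15 2 succeeds because 1 < 2 in the first field. A self-contained sketch of that logic, simplified to skip the per-field decimal validation the real helper performs:

    # Return 0 (true) when dotted version $1 sorts before version $2.
    lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2.x"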
00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.952 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
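The NVMF_EXAMPLE argv assembled here is what nvmfexamplestart launches near the end of this section (build/examples/nvmf -i 0 -g 10000 -m 0xF). A sketch of doing the same by hand; the socket poll is a crude stand-in for the harness's waitforlisten helper, and /var/tmp/spdk.sock is the default RPC socket the log itself names:

    # Launch the example nvmf target with the flags exactly as logged below.
    build/examples/nvmf -i 0 -g 10000 -m 0xF &
    nvmfpid=$!
    # Wait until the app exposes its RPC socket before configuring it.
    until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done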
00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:42.952 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:42.953 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.953 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.953 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.953 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:42.953 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:42.953 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:42.953 01:53:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
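nvmftestinit now classifies the RDMA-capable NICs: the pci_devs walk below matches this rig's two Intel E810 functions (vendor 0x8086, device 0x159b) and loads irdma with RoCE enabled. A sketch of the same discovery using lspci (the harness relies on its own cached PCI bus scan rather than lspci):

    # Find E810 functions and the kernel netdevs attached to each one.
    for pci in $(lspci -D -n -d 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net device under $pci: ${dev##*/}"
        done
    done
    modprobe irdma roce_ena=1   # RoCE mode, as in the trace below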
00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:11:49.521 Found 0000:18:00.0 (0x8086 - 0x159b) 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.521 01:53:08 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:11:49.521 Found 0000:18:00.1 (0x8086 - 0x159b) 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:11:49.521 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@403 -- # modinfo irdma 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:11:49.522 Found net devices under 0000:18:00.0: cvl_0_0 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:11:49.522 01:53:08 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:11:49.522 Found net devices under 0000:18:00.1: cvl_0_1 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # rdma_device_init 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@528 -- # allocate_nic_ips 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ cvl_0_0 == 
\c\v\l\_\0\_\1 ]] 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo cvl_0_0 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo cvl_0_1 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:11:49.522 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:11:49.522 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:11:49.522 altname enp24s0f0np0 00:11:49.522 altname ens785f0np0 00:11:49.522 inet 192.168.100.8/24 scope global cvl_0_0 00:11:49.522 valid_lft forever preferred_lft forever 00:11:49.522 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:11:49.522 valid_lft forever preferred_lft forever 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:11:49.522 29: cvl_0_1: mtu 1500 qdisc mq state UP group 
default qlen 1000 00:11:49.522 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:11:49.522 altname enp24s0f1np1 00:11:49.522 altname ens785f1np1 00:11:49.522 inet 192.168.100.9/24 scope global cvl_0_1 00:11:49.522 valid_lft forever preferred_lft forever 00:11:49.522 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:11:49.522 valid_lft forever preferred_lft forever 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo cvl_0_0 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo cvl_0_1 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:11:49.522 
01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:49.522 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:49.523 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:11:49.523 192.168.100.9' 00:11:49.523 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:11:49.523 192.168.100.9' 00:11:49.523 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # head -n 1 00:11:49.523 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:49.523 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:11:49.523 192.168.100.9' 00:11:49.523 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # tail -n +2 00:11:49.523 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # head -n 1 00:11:49.523 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:49.523 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:11:49.523 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:49.523 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:11:49.523 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:11:49.523 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:11:49.523 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:49.523 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:49.523 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:49.523 01:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.523 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:11:49.523 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3174897 00:11:49.523 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:49.523 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:49.523 01:53:09 
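The pipeline traced above (nvmf/common.sh@116-117) is how the harness resolves each RDMA-capable interface to its IPv4 address before assembling RDMA_IP_LIST. A minimal restatement, with the interface names and addresses taken from this run:

get_ip_address() {
    local interface=$1
    # 'ip -o -4' prints a one-line IPv4 record; field 4 is the CIDR address,
    # and cut strips the prefix length, leaving the bare address.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address cvl_0_0   # -> 192.168.100.8 in this run
get_ip_address cvl_0_1   # -> 192.168.100.9 in this run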
nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3174897 00:11:49.523 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 3174897 ']' 00:11:49.523 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.523 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:49.523 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.523 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:49.523 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.089 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:50.089 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:50.089 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:50.089 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:50.089 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.347 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:50.347 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.347 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.347 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.347 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:50.347 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.347 01:53:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.347 01:53:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.347 01:53:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:50.347 01:53:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:50.347 01:53:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.347 01:53:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.347 01:53:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.347 01:53:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:50.347 01:53:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:50.347 01:53:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
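waitforlisten above blocks until the freshly launched target (pid 3174897) is reachable on /var/tmp/spdk.sock. A hedged sketch of that pattern — the trace only shows the rpc_addr/max_retries locals and the retry counter, so probing the socket via rpc.py below is an assumption, not the verbatim helper:

wait_for_rpc_socket() {   # simplified stand-in for waitforlisten
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1    # target process died
        # assumption: an rpc.py round-trip is the liveness probe
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1
}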
common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.347 01:53:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.347 01:53:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.347 01:53:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:50.347 01:53:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.347 01:53:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.347 01:53:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.347 01:53:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:50.347 01:53:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:02.548 Initializing NVMe Controllers 00:12:02.548 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:02.548 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:02.548 Initialization complete. Launching workers. 00:12:02.548 ======================================================== 00:12:02.548 Latency(us) 00:12:02.548 Device Information : IOPS MiB/s Average min max 00:12:02.548 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 21792.60 85.13 2936.14 755.25 20016.78 00:12:02.548 ======================================================== 00:12:02.548 Total : 21792.60 85.13 2936.14 755.25 20016.78 00:12:02.548 00:12:02.548 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:02.548 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:02.548 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:02.548 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:02.548 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:02.548 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:02.548 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:02.548 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:02.548 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:02.548 rmmod nvme_rdma 00:12:02.548 rmmod nvme_fabrics 00:12:02.548 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:02.548 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:12:02.548 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:02.548 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 3174897 ']' 00:12:02.548 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
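Everything the example test did between startup and the perf numbers above condenses to five RPCs plus one initiator run; every argument below is verbatim from the trace:

rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc_cmd bdev_malloc_create 64 512        # 64 MiB ramdisk, 512 B blocks -> Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# 10 s of 4 KiB random I/O at queue depth 64, 30% reads (-M 30), against the listener:
build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The ~21.8k IOPS at ~2.9 ms average latency is the table printed above.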
nvmf/common.sh@516 -- # killprocess 3174897 00:12:02.548 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 3174897 ']' 00:12:02.548 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 3174897 00:12:02.548 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:12:02.548 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:02.548 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3174897 00:12:02.549 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:12:02.549 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:12:02.549 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3174897' 00:12:02.549 killing process with pid 3174897 00:12:02.549 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 3174897 00:12:02.549 01:53:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 3174897 00:12:03.116 nvmf threads initialize successfully 00:12:03.116 bdev subsystem init successfully 00:12:03.116 created a nvmf target service 00:12:03.116 create targets's poll groups done 00:12:03.116 all subsystems of target started 00:12:03.116 nvmf target is running 00:12:03.116 all subsystems of target stopped 00:12:03.116 destroy targets's poll groups done 00:12:03.116 destroyed the nvmf target service 00:12:03.116 bdev subsystem finish successfully 00:12:03.116 nvmf threads destroy successfully 00:12:03.116 01:53:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:03.116 01:53:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:12:03.116 01:53:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:03.116 01:53:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:03.116 01:53:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:03.438 00:12:03.438 real 0m20.475s 00:12:03.438 user 0m55.113s 00:12:03.438 sys 0m5.532s 00:12:03.438 01:53:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.438 01:53:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:03.438 ************************************ 00:12:03.438 END TEST nvmf_example 00:12:03.438 ************************************ 00:12:03.438 01:53:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:12:03.438 01:53:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:03.438 01:53:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.438 01:53:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:03.438 ************************************ 00:12:03.438 START TEST nvmf_filesystem 00:12:03.438 ************************************ 00:12:03.438 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
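nvmftestfini's teardown, condensed from the trace above: sync, retry unloading the kernel initiator modules (removal can fail while references drain, hence the set +e bracket and the 1..20 loop), then kill and reap the target process:

sync
set +e
for i in {1..20}; do
    modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    sleep 1   # assumption: the in-tree loop backs off between attempts
done
set -e
kill "$nvmfpid"    # pid 3174897 in this run
wait "$nvmfpid"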
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:12:03.438 * Looking for test storage... 00:12:03.438 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:12:03.438 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:03.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.439 --rc genhtml_branch_coverage=1 00:12:03.439 --rc genhtml_function_coverage=1 00:12:03.439 --rc genhtml_legend=1 00:12:03.439 --rc geninfo_all_blocks=1 00:12:03.439 --rc geninfo_unexecuted_blocks=1 00:12:03.439 00:12:03.439 ' 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:03.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.439 --rc genhtml_branch_coverage=1 00:12:03.439 --rc genhtml_function_coverage=1 00:12:03.439 --rc genhtml_legend=1 00:12:03.439 --rc geninfo_all_blocks=1 00:12:03.439 --rc geninfo_unexecuted_blocks=1 00:12:03.439 00:12:03.439 ' 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:03.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.439 --rc genhtml_branch_coverage=1 00:12:03.439 --rc genhtml_function_coverage=1 00:12:03.439 --rc genhtml_legend=1 00:12:03.439 --rc geninfo_all_blocks=1 00:12:03.439 --rc geninfo_unexecuted_blocks=1 00:12:03.439 00:12:03.439 ' 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:03.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.439 --rc genhtml_branch_coverage=1 00:12:03.439 --rc genhtml_function_coverage=1 00:12:03.439 --rc genhtml_legend=1 00:12:03.439 --rc geninfo_all_blocks=1 00:12:03.439 --rc geninfo_unexecuted_blocks=1 00:12:03.439 00:12:03.439 ' 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh 00:12:03.439 01:53:23 
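The `lt 1.15 2` check above expands to cmp_versions "1.15" "<" "2": split both versions on ., - or :, then walk the fields most-significant first. A simplified sketch — the in-tree helper additionally normalizes each field through its decimal() wrapper, as the trace shows:

cmp_versions() {
    local ver1 ver2 v op=$2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    # compare field by field; unset trailing fields default to 0 in arithmetic
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((ver1[v] > ver2[v])) && { [[ $op == ">" ]]; return; }
        ((ver1[v] < ver2[v])) && { [[ $op == "<" ]]; return; }
    done
    [[ $op == "<=" || $op == ">=" ]]   # all fields equal
}
cmp_versions 1.15 "<" 2   # true: 1 < 2 in the most-significant field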
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output ']' 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/build_config.sh 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 
00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:12:03.439 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:12:03.440 01:53:23 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- 
# CONFIG_EXAMPLES=y 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/applications.sh 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/applications.sh 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/config.h ]] 00:12:03.440 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:03.440 #define SPDK_CONFIG_H 00:12:03.440 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:03.440 #define SPDK_CONFIG_APPS 1 00:12:03.440 #define SPDK_CONFIG_ARCH native 00:12:03.440 #define SPDK_CONFIG_ASAN 1 00:12:03.440 #undef SPDK_CONFIG_AVAHI 00:12:03.440 #undef SPDK_CONFIG_CET 00:12:03.440 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:03.440 #define SPDK_CONFIG_COVERAGE 1 00:12:03.440 #define SPDK_CONFIG_CROSS_PREFIX 00:12:03.440 #undef SPDK_CONFIG_CRYPTO 00:12:03.440 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:03.440 #undef SPDK_CONFIG_CUSTOMOCF 00:12:03.440 #undef SPDK_CONFIG_DAOS 00:12:03.440 #define SPDK_CONFIG_DAOS_DIR 00:12:03.440 #define SPDK_CONFIG_DEBUG 1 00:12:03.440 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:03.440 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build 00:12:03.440 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:03.440 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:03.440 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:03.440 #undef SPDK_CONFIG_DPDK_UADK 00:12:03.440 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk 00:12:03.440 #define SPDK_CONFIG_EXAMPLES 1 00:12:03.440 #undef SPDK_CONFIG_FC 00:12:03.440 #define SPDK_CONFIG_FC_PATH 00:12:03.440 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:03.440 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:03.440 #define SPDK_CONFIG_FSDEV 1 00:12:03.440 #undef SPDK_CONFIG_FUSE 00:12:03.440 #undef SPDK_CONFIG_FUZZER 00:12:03.440 #define SPDK_CONFIG_FUZZER_LIB 00:12:03.440 #undef SPDK_CONFIG_GOLANG 00:12:03.440 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:03.440 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:03.440 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:03.440 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:03.440 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:03.440 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:03.440 #undef SPDK_CONFIG_HAVE_LZ4 00:12:03.440 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:03.440 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:03.440 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:03.441 #define SPDK_CONFIG_IDXD 1 00:12:03.441 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:03.441 #undef SPDK_CONFIG_IPSEC_MB 00:12:03.441 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:03.441 #define SPDK_CONFIG_ISAL 1 00:12:03.441 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:03.441 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:03.441 #define SPDK_CONFIG_LIBDIR 00:12:03.441 #undef SPDK_CONFIG_LTO 00:12:03.441 #define SPDK_CONFIG_MAX_LCORES 128 00:12:03.441 #define SPDK_CONFIG_NVME_CUSE 1 00:12:03.441 #undef SPDK_CONFIG_OCF 00:12:03.441 #define SPDK_CONFIG_OCF_PATH 00:12:03.441 #define SPDK_CONFIG_OPENSSL_PATH 00:12:03.441 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:03.441 #define SPDK_CONFIG_PGO_DIR 00:12:03.441 #undef SPDK_CONFIG_PGO_USE 00:12:03.441 #define SPDK_CONFIG_PREFIX /usr/local 00:12:03.441 #undef SPDK_CONFIG_RAID5F 00:12:03.441 #undef SPDK_CONFIG_RBD 00:12:03.441 #define SPDK_CONFIG_RDMA 1 00:12:03.441 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:03.441 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:03.441 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:03.441 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:03.441 #define SPDK_CONFIG_SHARED 1 00:12:03.441 #undef SPDK_CONFIG_SMA 00:12:03.441 #define SPDK_CONFIG_TESTS 1 00:12:03.441 #undef SPDK_CONFIG_TSAN 00:12:03.441 #define SPDK_CONFIG_UBLK 1 00:12:03.441 
#define SPDK_CONFIG_UBSAN 1 00:12:03.441 #undef SPDK_CONFIG_UNIT_TESTS 00:12:03.441 #undef SPDK_CONFIG_URING 00:12:03.441 #define SPDK_CONFIG_URING_PATH 00:12:03.441 #undef SPDK_CONFIG_URING_ZNS 00:12:03.441 #undef SPDK_CONFIG_USDT 00:12:03.441 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:03.441 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:03.441 #undef SPDK_CONFIG_VFIO_USER 00:12:03.441 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:03.441 #define SPDK_CONFIG_VHOST 1 00:12:03.441 #define SPDK_CONFIG_VIRTIO 1 00:12:03.441 #undef SPDK_CONFIG_VTUNE 00:12:03.441 #define SPDK_CONFIG_VTUNE_DIR 00:12:03.441 #define SPDK_CONFIG_WERROR 1 00:12:03.441 #define SPDK_CONFIG_WPDK_DIR 00:12:03.441 #undef SPDK_CONFIG_XNVME 00:12:03.441 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
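The backslash-heavy test at applications.sh@23 above is xtrace's rendering of a plain glob match: slurp the generated config header and look for the DEBUG define, which gates SPDK_AUTOTEST_DEBUG_APPS. Restated:

config_h=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/config.h
if [[ $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    : # debug build: the per-app debug knobs may be honored
fi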
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/common 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/common 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/.run_test_name 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:03.441 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:03.737 01:53:23 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power ]] 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
export SPDK_TEST_ISCSI 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
export SPDK_TEST_VHOST 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_VMD 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export 
SPDK_TEST_ACCEL_IAA 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:03.737 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
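The entries traced above (autotest_common.sh@197 through @242, continued just below) pin sanitizer behavior process-wide before any SPDK binary starts: ASAN aborts on error instead of dumping core, UBSAN halts with exit code 134, and a LeakSanitizer suppression file is regenerated on the fly for a known libfuse3 leak. A minimal standalone sketch of that setup, using the option strings and suppression-file path verbatim from this log (running it outside the harness, and the exact redirection used to write the file, are assumptions):

    # Sanitizer environment as traced in autotest_common.sh above.
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    # Regenerate the leak-suppression file; libfuse3.so has a known leak that
    # would otherwise fail every ASAN-enabled run (sketch; harness may append).
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" > "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file
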
00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=rdma 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3176733 ]] 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3176733 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.p0eu6c 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target /tmp/spdk.p0eu6c/tests/target /tmp/spdk.p0eu6c 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=722997248 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4561432576 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=87430406144 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=94510731264 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=7080325120 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=47240568832 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=47255363584 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=14794752 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=18879270912 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=18902147072 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22876160 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=47254867968 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=47255367680 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=499712 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:03.738 01:53:23 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=9451057152 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=9451069440 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:12:03.738 * Looking for test storage... 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=87430406144 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9294917632 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:12:03.738 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:12:03.738 
01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.738 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:03.739 
01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:03.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.739 --rc genhtml_branch_coverage=1 00:12:03.739 --rc genhtml_function_coverage=1 00:12:03.739 --rc genhtml_legend=1 00:12:03.739 --rc geninfo_all_blocks=1 00:12:03.739 --rc geninfo_unexecuted_blocks=1 00:12:03.739 00:12:03.739 ' 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:03.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.739 --rc genhtml_branch_coverage=1 00:12:03.739 --rc genhtml_function_coverage=1 00:12:03.739 --rc genhtml_legend=1 00:12:03.739 --rc geninfo_all_blocks=1 00:12:03.739 --rc geninfo_unexecuted_blocks=1 00:12:03.739 00:12:03.739 ' 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:03.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.739 --rc genhtml_branch_coverage=1 00:12:03.739 --rc genhtml_function_coverage=1 00:12:03.739 --rc genhtml_legend=1 00:12:03.739 --rc geninfo_all_blocks=1 00:12:03.739 --rc geninfo_unexecuted_blocks=1 00:12:03.739 00:12:03.739 ' 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:03.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.739 --rc genhtml_branch_coverage=1 00:12:03.739 --rc genhtml_function_coverage=1 00:12:03.739 --rc genhtml_legend=1 00:12:03.739 --rc 
geninfo_all_blocks=1 00:12:03.739 --rc geninfo_unexecuted_blocks=1 00:12:03.739 00:12:03.739 ' 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:03.739 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:03.739 01:53:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@320 -- # e810=() 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:12:10.381 Found 0000:18:00.0 (0x8086 - 0x159b) 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:10.381 01:53:29 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:12:10.381 Found 0000:18:00.1 (0x8086 - 0x159b) 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@403 -- # modinfo irdma 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.381 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:12:10.382 Found net devices under 0000:18:00.0: cvl_0_0 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:12:10.382 Found net devices under 0000:18:00.1: cvl_0_1 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # rdma_device_init 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@528 -- # allocate_nic_ips 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for 
net_dev in "${net_devs[@]}" 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo cvl_0_0 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo cvl_0_1 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:12:10.382 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:12:10.382 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:12:10.382 altname enp24s0f0np0 00:12:10.382 altname ens785f0np0 00:12:10.382 inet 192.168.100.8/24 scope global cvl_0_0 00:12:10.382 valid_lft forever preferred_lft forever 00:12:10.382 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:12:10.382 valid_lft forever preferred_lft forever 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:10.382 01:53:29 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:12:10.382 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:12:10.382 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:12:10.382 altname enp24s0f1np1 00:12:10.382 altname ens785f1np1 00:12:10.382 inet 192.168.100.9/24 scope global cvl_0_1 00:12:10.382 valid_lft forever preferred_lft forever 00:12:10.382 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:12:10.382 valid_lft forever preferred_lft forever 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo cvl_0_0 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo cvl_0_1 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:10.382 01:53:29 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:10.382 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:12:10.383 192.168.100.9' 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:12:10.383 192.168.100.9' 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # head -n 1 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:12:10.383 192.168.100.9' 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # tail -n +2 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # head -n 1 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:10.383 ************************************ 00:12:10.383 START TEST 
nvmf_filesystem_no_in_capsule 00:12:10.383 ************************************ 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=3179650 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 3179650 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3179650 ']' 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:10.383 01:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:10.383 [2024-10-09 01:53:29.750252] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:12:10.383 [2024-10-09 01:53:29.750354] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.383 [2024-10-09 01:53:29.881902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.383 [2024-10-09 01:53:30.079991] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.383 [2024-10-09 01:53:30.080042] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
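At this point the first pass of the suite is underway: run_test has invoked nvmf_filesystem_part with in_capsule=0, nvmfappstart has launched nvmf_tgt as pid 3179650, and waitforlisten blocks until the target answers on /var/tmp/spdk.sock. A minimal sketch of that gating pattern, assuming the stock scripts/rpc.py client and the socket path printed above (an illustration of the idea, not the harness code itself):

# poll the RPC socket until the freshly started nvmf_tgt accepts requests
while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5   # target is still bringing up the DPDK EAL, reactors and transports
done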
00:12:10.383 [2024-10-09 01:53:30.080056] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.383 [2024-10-09 01:53:30.080071] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.383 [2024-10-09 01:53:30.080081] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.383 [2024-10-09 01:53:30.082382] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.383 [2024-10-09 01:53:30.082398] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.383 [2024-10-09 01:53:30.082422] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.383 [2024-10-09 01:53:30.082430] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.950 01:53:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:10.950 01:53:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:10.950 01:53:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:10.950 01:53:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:10.950 01:53:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:10.950 01:53:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.950 01:53:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:10.950 01:53:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:12:10.950 01:53:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.950 01:53:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:10.950 [2024-10-09 01:53:30.627771] rdma.c:2735:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:12:10.950 [2024-10-09 01:53:30.644566] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x6120000292c0/0x617000007c40) succeed. 00:12:10.950 [2024-10-09 01:53:30.654256] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029440/0x617000007fc0) succeed. 00:12:10.950 [2024-10-09 01:53:30.654295] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:12:10.950 01:53:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.950 01:53:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:10.950 01:53:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.950 01:53:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.517 Malloc1 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.518 [2024-10-09 01:53:31.217031] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:11.518 01:53:31 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:11.518 { 00:12:11.518 "name": "Malloc1", 00:12:11.518 "aliases": [ 00:12:11.518 "cd1ad07c-b058-4460-8414-9382019d3602" 00:12:11.518 ], 00:12:11.518 "product_name": "Malloc disk", 00:12:11.518 "block_size": 512, 00:12:11.518 "num_blocks": 1048576, 00:12:11.518 "uuid": "cd1ad07c-b058-4460-8414-9382019d3602", 00:12:11.518 "assigned_rate_limits": { 00:12:11.518 "rw_ios_per_sec": 0, 00:12:11.518 "rw_mbytes_per_sec": 0, 00:12:11.518 "r_mbytes_per_sec": 0, 00:12:11.518 "w_mbytes_per_sec": 0 00:12:11.518 }, 00:12:11.518 "claimed": true, 00:12:11.518 "claim_type": "exclusive_write", 00:12:11.518 "zoned": false, 00:12:11.518 "supported_io_types": { 00:12:11.518 "read": true, 00:12:11.518 "write": true, 00:12:11.518 "unmap": true, 00:12:11.518 "flush": true, 00:12:11.518 "reset": true, 00:12:11.518 "nvme_admin": false, 00:12:11.518 "nvme_io": false, 00:12:11.518 "nvme_io_md": false, 00:12:11.518 "write_zeroes": true, 00:12:11.518 "zcopy": true, 00:12:11.518 "get_zone_info": false, 00:12:11.518 "zone_management": false, 00:12:11.518 "zone_append": false, 00:12:11.518 "compare": false, 00:12:11.518 "compare_and_write": false, 00:12:11.518 "abort": true, 00:12:11.518 "seek_hole": false, 00:12:11.518 "seek_data": false, 00:12:11.518 "copy": true, 00:12:11.518 "nvme_iov_md": false 00:12:11.518 }, 00:12:11.518 "memory_domains": [ 00:12:11.518 { 00:12:11.518 "dma_device_id": "system", 00:12:11.518 "dma_device_type": 1 00:12:11.518 }, 00:12:11.518 { 00:12:11.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.518 "dma_device_type": 2 00:12:11.518 } 00:12:11.518 ], 00:12:11.518 "driver_specific": {} 00:12:11.518 } 00:12:11.518 ]' 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:11.518 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:12:11.777 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:11.777 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:11.777 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:11.777 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:11.777 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:11.777 01:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:14.310 01:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:14.310 01:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:14.310 01:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.310 01:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:14.310 01:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.310 01:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:14.310 01:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:14.310 01:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:14.310 01:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:14.310 01:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:14.310 01:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:14.310 01:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:14.310 01:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:14.310 01:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:14.310 01:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:14.310 01:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:12:14.310 01:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:14.310 01:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:14.310 01:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:15.245 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:15.245 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:15.245 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:15.245 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.245 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.245 ************************************ 00:12:15.245 START TEST filesystem_ext4 00:12:15.245 ************************************ 00:12:15.245 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:15.245 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:15.245 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:15.245 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:15.245 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:15.245 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:15.245 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:15.245 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:15.245 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:15.245 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:15.246 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:15.246 mke2fs 1.47.0 (5-Feb-2023) 00:12:15.246 Discarding device blocks: 0/522240 done 00:12:15.246 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:15.246 Filesystem UUID: 434f669b-cd81-47e1-b479-3be152b78ac3 00:12:15.246 Superblock backups stored on 
blocks: 00:12:15.246 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:15.246 00:12:15.246 Allocating group tables: 0/64 done 00:12:15.246 Writing inode tables: 0/64 done 00:12:15.246 Creating journal (8192 blocks): done 00:12:15.246 Writing superblocks and filesystem accounting information: 0/64 done 00:12:15.246 00:12:15.246 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:15.246 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:15.246 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:15.246 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:15.246 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:15.246 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:15.246 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:15.246 01:53:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:15.246 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3179650 00:12:15.246 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:15.246 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:15.246 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:15.246 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:15.246 00:12:15.246 real 0m0.227s 00:12:15.246 user 0m0.028s 00:12:15.246 sys 0m0.076s 00:12:15.246 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:15.246 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:15.246 ************************************ 00:12:15.246 END TEST filesystem_ext4 00:12:15.246 ************************************ 00:12:15.504 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:15.504 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:15.504 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.504 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:12:15.504 ************************************ 00:12:15.504 START TEST filesystem_btrfs 00:12:15.504 ************************************ 00:12:15.504 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:15.505 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:15.505 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:15.505 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:15.505 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:15.505 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:15.505 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:15.505 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:15.505 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:15.505 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:15.505 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:15.505 btrfs-progs v6.8.1 00:12:15.505 See https://btrfs.readthedocs.io for more information. 00:12:15.505 00:12:15.505 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:15.505 NOTE: several default settings have changed in version 5.15, please make sure 00:12:15.505 this does not affect your deployments: 00:12:15.505 - DUP for metadata (-m dup) 00:12:15.505 - enabled no-holes (-O no-holes) 00:12:15.505 - enabled free-space-tree (-R free-space-tree) 00:12:15.505 00:12:15.505 Label: (null) 00:12:15.505 UUID: 6e8c040e-ccb4-4ce5-9d88-f4ab024dd4c3 00:12:15.505 Node size: 16384 00:12:15.505 Sector size: 4096 (CPU page size: 4096) 00:12:15.505 Filesystem size: 510.00MiB 00:12:15.505 Block group profiles: 00:12:15.505 Data: single 8.00MiB 00:12:15.505 Metadata: DUP 32.00MiB 00:12:15.505 System: DUP 8.00MiB 00:12:15.505 SSD detected: yes 00:12:15.505 Zoned device: no 00:12:15.505 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:15.505 Checksum: crc32c 00:12:15.505 Number of devices: 1 00:12:15.505 Devices: 00:12:15.505 ID SIZE PATH 00:12:15.505 1 510.00MiB /dev/nvme0n1p1 00:12:15.505 00:12:15.505 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:15.505 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3179650 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:15.764 00:12:15.764 real 0m0.276s 00:12:15.764 user 0m0.029s 00:12:15.764 sys 0m0.133s 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:15.764 ************************************ 00:12:15.764 END TEST filesystem_btrfs 
00:12:15.764 ************************************ 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.764 ************************************ 00:12:15.764 START TEST filesystem_xfs 00:12:15.764 ************************************ 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:15.764 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:16.023 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:16.023 = sectsz=512 attr=2, projid32bit=1 00:12:16.023 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:16.023 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:16.023 data = bsize=4096 blocks=130560, imaxpct=25 00:12:16.023 = sunit=0 swidth=0 blks 00:12:16.023 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:16.023 log =internal log bsize=4096 blocks=16384, version=2 00:12:16.023 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:16.023 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:16.023 Discarding blocks...Done. 
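The mkfs.xfs report above completes the formatting step for the last of the three filesystems in this pass. Every subtest reaches mkfs through the same make_filesystem helper seen in the traces, whose real work is picking the right spelling of the force flag before formatting the GPT partition carved out of the exported namespace. A condensed sketch of that dispatch, reusing the helper's own locals (fstype, force) from the trace; the retry bookkeeping around i is omitted:

# make_filesystem-style dispatch: mke2fs takes -F, mkfs.btrfs and mkfs.xfs take -f
case "$fstype" in
    ext4)      force=-F ;;
    btrfs|xfs) force=-f ;;
esac
mkfs."$fstype" "$force" /dev/nvme0n1p1

After formatting, each subtest runs the same smoke cycle visible above: mount the partition on /mnt/device, touch and rm a file with sync in between, umount, then kill -0 the target pid to confirm nvmf_tgt survived the I/O.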
00:12:16.023 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:16.023 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:16.023 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:16.023 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:16.023 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:16.023 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:16.023 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:16.023 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:16.023 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3179650 00:12:16.023 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:16.023 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:16.023 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:16.023 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:16.023 00:12:16.023 real 0m0.230s 00:12:16.023 user 0m0.031s 00:12:16.023 sys 0m0.077s 00:12:16.023 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:16.023 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:16.023 ************************************ 00:12:16.023 END TEST filesystem_xfs 00:12:16.023 ************************************ 00:12:16.023 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:16.023 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:16.023 01:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:16.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:16.962 01:53:36 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3179650 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3179650 ']' 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3179650 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3179650 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3179650' 00:12:16.962 killing process with pid 3179650 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 3179650 00:12:16.962 01:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 3179650 00:12:20.244 01:53:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:20.244 00:12:20.244 real 0m9.837s 00:12:20.244 user 0m36.331s 00:12:20.244 sys 0m1.517s 00:12:20.244 01:53:39 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:20.244 01:53:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.244 ************************************ 00:12:20.244 END TEST nvmf_filesystem_no_in_capsule 00:12:20.244 ************************************ 00:12:20.244 01:53:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:20.244 01:53:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:20.244 01:53:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:20.244 01:53:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:20.244 ************************************ 00:12:20.244 START TEST nvmf_filesystem_in_capsule 00:12:20.244 ************************************ 00:12:20.244 01:53:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:20.244 01:53:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:20.244 01:53:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:20.244 01:53:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:20.244 01:53:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:20.244 01:53:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.244 01:53:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=3181094 00:12:20.244 01:53:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 3181094 00:12:20.244 01:53:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.244 01:53:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3181094 ']' 00:12:20.244 01:53:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.244 01:53:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:20.244 01:53:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
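While the second nvmf_tgt (pid 3181094) comes up: this in_capsule pass re-runs the identical ext4/btrfs/xfs matrix and differs from the first pass only in how the RDMA transport is created. With -c 4096 the target accepts up to 4096 bytes of write payload inside the NVMe-oF command capsule itself, sparing an RDMA READ for small writes; the first pass requested -c 0, which the target raised to 256, the minimum required to support msdbd=16 (see the earlier nvmf_rdma_create warning). The two transport RPCs, as issued through rpc_cmd (a wrapper over scripts/rpc.py) in the traces:

# pass 1 (no_in_capsule): size 0, bumped to the 256-byte minimum by the target
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
# pass 2 (in_capsule): small write payloads travel inside the command capsule
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096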
00:12:20.244 01:53:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:20.244 01:53:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.244 [2024-10-09 01:53:39.685336] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:12:20.244 [2024-10-09 01:53:39.685447] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.244 [2024-10-09 01:53:39.818428] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.244 [2024-10-09 01:53:40.026476] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.244 [2024-10-09 01:53:40.026542] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.244 [2024-10-09 01:53:40.026557] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.244 [2024-10-09 01:53:40.026574] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.244 [2024-10-09 01:53:40.026585] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:20.244 [2024-10-09 01:53:40.029009] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.244 [2024-10-09 01:53:40.029075] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.244 [2024-10-09 01:53:40.029096] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.244 [2024-10-09 01:53:40.029105] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.812 01:53:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:20.812 01:53:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:20.812 01:53:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:20.812 01:53:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:20.812 01:53:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.812 01:53:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.812 01:53:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:20.812 01:53:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:12:20.812 01:53:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.812 01:53:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.812 [2024-10-09 01:53:40.560903] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device 
rocep24s0f0(0x6120000292c0/0x617000007c40) succeed. 00:12:20.812 [2024-10-09 01:53:40.570665] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029440/0x617000007fc0) succeed. 00:12:20.812 [2024-10-09 01:53:40.570707] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:12:20.812 01:53:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.812 01:53:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:20.812 01:53:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.812 01:53:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.380 Malloc1 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.380 [2024-10-09 01:53:41.147137] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:21.380 01:53:41 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.380 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:21.380 { 00:12:21.380 "name": "Malloc1", 00:12:21.380 "aliases": [ 00:12:21.380 "944e6763-93bf-4bbf-abb9-dc43efffdb83" 00:12:21.380 ], 00:12:21.380 "product_name": "Malloc disk", 00:12:21.380 "block_size": 512, 00:12:21.381 "num_blocks": 1048576, 00:12:21.381 "uuid": "944e6763-93bf-4bbf-abb9-dc43efffdb83", 00:12:21.381 "assigned_rate_limits": { 00:12:21.381 "rw_ios_per_sec": 0, 00:12:21.381 "rw_mbytes_per_sec": 0, 00:12:21.381 "r_mbytes_per_sec": 0, 00:12:21.381 "w_mbytes_per_sec": 0 00:12:21.381 }, 00:12:21.381 "claimed": true, 00:12:21.381 "claim_type": "exclusive_write", 00:12:21.381 "zoned": false, 00:12:21.381 "supported_io_types": { 00:12:21.381 "read": true, 00:12:21.381 "write": true, 00:12:21.381 "unmap": true, 00:12:21.381 "flush": true, 00:12:21.381 "reset": true, 00:12:21.381 "nvme_admin": false, 00:12:21.381 "nvme_io": false, 00:12:21.381 "nvme_io_md": false, 00:12:21.381 "write_zeroes": true, 00:12:21.381 "zcopy": true, 00:12:21.381 "get_zone_info": false, 00:12:21.381 "zone_management": false, 00:12:21.381 "zone_append": false, 00:12:21.381 "compare": false, 00:12:21.381 "compare_and_write": false, 00:12:21.381 "abort": true, 00:12:21.381 "seek_hole": false, 00:12:21.381 "seek_data": false, 00:12:21.381 "copy": true, 00:12:21.381 "nvme_iov_md": false 00:12:21.381 }, 00:12:21.381 "memory_domains": [ 00:12:21.381 { 00:12:21.381 "dma_device_id": "system", 00:12:21.381 "dma_device_type": 1 00:12:21.381 }, 00:12:21.381 { 00:12:21.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.381 "dma_device_type": 2 00:12:21.381 } 00:12:21.381 ], 00:12:21.381 "driver_specific": {} 00:12:21.381 } 00:12:21.381 ]' 00:12:21.381 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:21.639 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:21.639 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:21.639 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:21.639 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:21.639 01:53:41 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:21.639 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:21.639 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:21.898 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:21.898 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:21.898 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.898 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:21.898 01:53:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:23.799 01:53:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:23.799 01:53:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:23.799 01:53:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:23.799 01:53:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:23.799 01:53:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.799 01:53:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:23.799 01:53:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:23.799 01:53:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:23.799 01:53:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:23.799 01:53:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:23.799 01:53:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:23.799 01:53:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:23.799 01:53:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:23.799 01:53:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:23.799 01:53:43 
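On the host side, the connect and the serial poll above reduce to the following sketch (flags and IDs copied from the trace; nvme-cli with RDMA support assumed):

    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 \
        --hostid=80e71deb-ee4e-e711-906e-0012795d9712
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done   # waitforserial
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # nvme0n1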
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:23.799 01:53:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:23.799 01:53:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:23.799 01:53:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:24.057 01:53:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:24.993 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:24.993 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:24.993 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:24.993 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:24.993 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.993 ************************************ 00:12:24.993 START TEST filesystem_in_capsule_ext4 00:12:24.993 ************************************ 00:12:24.993 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:24.993 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:24.993 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:24.993 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:24.993 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:24.993 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:24.993 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:24.993 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:24.993 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:24.993 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:24.993 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:24.993 mke2fs 1.47.0 (5-Feb-2023) 00:12:24.993 Discarding device blocks: 0/522240 done 00:12:24.993 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:24.993 Filesystem UUID: bb08776c-ff29-4c0e-b1d3-99b1e86838d9 00:12:24.993 Superblock backups stored on blocks: 00:12:24.993 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:24.993 00:12:24.993 Allocating group tables: 0/64 done 00:12:24.993 Writing inode tables: 0/64 done 00:12:24.993 Creating journal (8192 blocks): done 00:12:24.993 Writing superblocks and filesystem accounting information: 0/64 done 00:12:24.993 00:12:24.993 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:24.993 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:25.252 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:25.252 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:25.252 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:25.252 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:25.252 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:25.252 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:25.252 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3181094 00:12:25.252 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:25.252 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:25.252 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:25.252 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:25.252 00:12:25.252 real 0m0.221s 00:12:25.252 user 0m0.025s 00:12:25.252 sys 0m0.077s 00:12:25.252 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.252 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:25.252 ************************************ 00:12:25.252 END TEST filesystem_in_capsule_ext4 00:12:25.252 ************************************ 00:12:25.252 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 
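The ext4 pass above, and the btrfs and xfs passes that follow, all exercise the same nvmf_filesystem_create routine: make a filesystem on the partition, mount it, create and delete a file with syncs in between, unmount, then confirm the target is still alive. A parameterized sketch of that loop, assuming the device and target PID from this run (not the harness code itself):

    fs_check() {
        local fstype=$1 dev=/dev/nvme0n1p1 pid=3181094
        if [ "$fstype" = ext4 ]; then mkfs.ext4 -F "$dev"; else "mkfs.$fstype" -f "$dev"; fi
        mount "$dev" /mnt/device
        touch /mnt/device/aaa && sync
        rm /mnt/device/aaa && sync
        umount /mnt/device
        kill -0 "$pid"                               # target process must still be running
        lsblk -l -o NAME | grep -q -w nvme0n1p1      # partition must still be visible
    }
    fs_check ext4   # repeated below with btrfs and xfs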
-- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:25.252 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:25.252 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.252 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:25.252 ************************************ 00:12:25.252 START TEST filesystem_in_capsule_btrfs 00:12:25.252 ************************************ 00:12:25.252 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:25.252 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:25.252 01:53:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:25.252 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:25.252 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:25.252 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:25.252 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:25.252 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:25.252 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:25.252 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:25.252 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:25.511 btrfs-progs v6.8.1 00:12:25.511 See https://btrfs.readthedocs.io for more information. 00:12:25.511 00:12:25.511 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:25.511 NOTE: several default settings have changed in version 5.15, please make sure 00:12:25.511 this does not affect your deployments: 00:12:25.511 - DUP for metadata (-m dup) 00:12:25.511 - enabled no-holes (-O no-holes) 00:12:25.511 - enabled free-space-tree (-R free-space-tree) 00:12:25.511 00:12:25.511 Label: (null) 00:12:25.511 UUID: fcf6e404-95ec-4ac7-99b0-385cdae8abd9 00:12:25.511 Node size: 16384 00:12:25.511 Sector size: 4096 (CPU page size: 4096) 00:12:25.511 Filesystem size: 510.00MiB 00:12:25.511 Block group profiles: 00:12:25.511 Data: single 8.00MiB 00:12:25.511 Metadata: DUP 32.00MiB 00:12:25.511 System: DUP 8.00MiB 00:12:25.511 SSD detected: yes 00:12:25.511 Zoned device: no 00:12:25.511 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:25.511 Checksum: crc32c 00:12:25.511 Number of devices: 1 00:12:25.511 Devices: 00:12:25.511 ID SIZE PATH 00:12:25.511 1 510.00MiB /dev/nvme0n1p1 00:12:25.511 00:12:25.511 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:25.511 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:25.511 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:25.511 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:25.511 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:25.511 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:25.511 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:25.511 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:25.511 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3181094 00:12:25.511 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:25.511 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:25.511 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:25.511 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:25.511 00:12:25.511 real 0m0.274s 00:12:25.511 user 0m0.033s 00:12:25.511 sys 0m0.132s 00:12:25.511 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.511 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:12:25.511 ************************************ 00:12:25.511 END TEST filesystem_in_capsule_btrfs 00:12:25.511 ************************************ 00:12:25.511 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:25.511 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:25.511 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.511 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:25.769 ************************************ 00:12:25.769 START TEST filesystem_in_capsule_xfs 00:12:25.769 ************************************ 00:12:25.769 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:25.769 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:25.769 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:25.769 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:25.769 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:25.769 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:25.769 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:25.769 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:25.769 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:25.769 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:25.769 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:25.769 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:25.769 = sectsz=512 attr=2, projid32bit=1 00:12:25.769 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:25.769 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:25.769 data = bsize=4096 blocks=130560, imaxpct=25 00:12:25.769 = sunit=0 swidth=0 blks 00:12:25.770 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:25.770 log =internal log bsize=4096 blocks=16384, version=2 00:12:25.770 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:25.770 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:25.770 Discarding blocks...Done. 
00:12:25.770 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:25.770 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:25.770 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:25.770 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:25.770 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:25.770 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:25.770 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:25.770 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:25.770 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3181094 00:12:25.770 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:25.770 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:25.770 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:25.770 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:26.028 00:12:26.028 real 0m0.233s 00:12:26.028 user 0m0.038s 00:12:26.028 sys 0m0.072s 00:12:26.028 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:26.028 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:26.028 ************************************ 00:12:26.028 END TEST filesystem_in_capsule_xfs 00:12:26.028 ************************************ 00:12:26.028 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:26.028 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:26.028 01:53:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.962 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.962 01:53:46 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:26.962 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:26.962 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.962 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:26.962 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.962 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:26.962 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.962 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.962 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.962 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.962 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:26.962 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3181094 00:12:26.962 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3181094 ']' 00:12:26.962 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3181094 00:12:26.963 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:26.963 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:26.963 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3181094 00:12:26.963 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:26.963 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:26.963 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3181094' 00:12:26.963 killing process with pid 3181094 00:12:26.963 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 3181094 00:12:26.963 01:53:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 3181094 00:12:30.248 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:30.249 00:12:30.249 real 0m9.827s 
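Teardown mirrors setup: disconnect the host, delete the subsystem over RPC, then kill the target by PID. A condensed sketch of the sequence traced above (wait only succeeds here because the harness started nvmf_tgt itself):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 3181094 && wait 3181094    # killprocess: terminate the nvmf_tgt reactor and reap it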
00:12:30.249 user 0m36.188s 00:12:30.249 sys 0m1.496s 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.249 ************************************ 00:12:30.249 END TEST nvmf_filesystem_in_capsule 00:12:30.249 ************************************ 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:30.249 rmmod nvme_rdma 00:12:30.249 rmmod nvme_fabrics 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:12:30.249 00:12:30.249 real 0m26.491s 00:12:30.249 user 1m14.492s 00:12:30.249 sys 0m7.968s 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:30.249 ************************************ 00:12:30.249 END TEST nvmf_filesystem 00:12:30.249 ************************************ 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:30.249 ************************************ 00:12:30.249 START TEST nvmf_target_discovery 00:12:30.249 ************************************ 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:12:30.249 * Looking for test storage... 
00:12:30.249 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:30.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.249 --rc genhtml_branch_coverage=1 00:12:30.249 --rc genhtml_function_coverage=1 00:12:30.249 --rc genhtml_legend=1 00:12:30.249 --rc geninfo_all_blocks=1 00:12:30.249 --rc geninfo_unexecuted_blocks=1 00:12:30.249 00:12:30.249 ' 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:30.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.249 --rc genhtml_branch_coverage=1 00:12:30.249 --rc genhtml_function_coverage=1 00:12:30.249 --rc genhtml_legend=1 00:12:30.249 --rc geninfo_all_blocks=1 00:12:30.249 --rc geninfo_unexecuted_blocks=1 00:12:30.249 00:12:30.249 ' 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:30.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.249 --rc genhtml_branch_coverage=1 00:12:30.249 --rc genhtml_function_coverage=1 00:12:30.249 --rc genhtml_legend=1 00:12:30.249 --rc geninfo_all_blocks=1 00:12:30.249 --rc geninfo_unexecuted_blocks=1 00:12:30.249 00:12:30.249 ' 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:30.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.249 --rc genhtml_branch_coverage=1 00:12:30.249 --rc genhtml_function_coverage=1 00:12:30.249 --rc genhtml_legend=1 00:12:30.249 --rc geninfo_all_blocks=1 00:12:30.249 --rc geninfo_unexecuted_blocks=1 00:12:30.249 00:12:30.249 ' 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:12:30.249 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.250 01:53:49 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:30.250 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:30.250 01:53:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:36.818 01:53:55 
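The "[: : integer expression expected" message logged above is shell noise rather than a test failure: common.sh line 33 applies -eq to a value that is empty in this run, so the test errors out and simply evaluates false. A hypothetical illustration (SOME_FLAG is a stand-in, not the actual variable tested there):

    [ '' -eq 1 ]                  # prints the error and returns non-zero
    [ "${SOME_FLAG:-0}" -eq 1 ]   # guarded form that stays quiet when the flag is unset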
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:36.818 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:36.819 
01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:12:36.819 Found 0000:18:00.0 (0x8086 - 0x159b) 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:12:36.819 Found 0000:18:00.1 (0x8086 - 0x159b) 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@403 -- # modinfo irdma 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:12:36.819 Found net devices under 0000:18:00.0: cvl_0_0 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:12:36.819 Found net devices under 0000:18:00.1: cvl_0_1 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # rdma_device_init 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:36.819 01:53:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@528 -- # allocate_nic_ips 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
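rdma_device_init loads the kernel IB/RDMA stack before any addresses are checked. The modprobe sequence from the trace, collected into one sketch:

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
    # the e810 NICs were already switched to RoCE earlier in the trace:
    # modprobe irdma roce_ena=1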
nvmf/common.sh@77 -- # get_rdma_if_list 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo cvl_0_0 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo cvl_0_1 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:12:36.819 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:12:36.819 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:12:36.819 altname enp24s0f0np0 00:12:36.819 altname ens785f0np0 00:12:36.819 inet 192.168.100.8/24 scope 
global cvl_0_0 00:12:36.819 valid_lft forever preferred_lft forever 00:12:36.819 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:12:36.819 valid_lft forever preferred_lft forever 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:36.819 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:12:36.819 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:12:36.819 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:12:36.819 altname enp24s0f1np1 00:12:36.819 altname ens785f1np1 00:12:36.819 inet 192.168.100.9/24 scope global cvl_0_1 00:12:36.819 valid_lft forever preferred_lft forever 00:12:36.820 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:12:36.820 valid_lft forever preferred_lft forever 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:12:36.820 01:53:56 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo cvl_0_0 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo cvl_0_1 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:12:36.820 192.168.100.9' 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:12:36.820 192.168.100.9' 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # head -n 1 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # tail -n +2 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:12:36.820 192.168.100.9' 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # head -n 1 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=3185499 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 3185499 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 3185499 ']' 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:36.820 01:53:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.820 [2024-10-09 01:53:56.302546] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:12:36.820 [2024-10-09 01:53:56.302649] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.820 [2024-10-09 01:53:56.438384] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:36.820 [2024-10-09 01:53:56.629381] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.820 [2024-10-09 01:53:56.629437] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
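The handshake captured above, reduced to a standalone sketch: start nvmf_tgt with an explicit shared-memory id (-i), tracepoint mask (-e), and core mask (-m), then poll the RPC socket until the app answers. The poll loop below is an assumption standing in for the harness's waitforlisten helper; the binary path, flags, and socket are the ones visible in this trace.

  # Launch the target, then wait for /var/tmp/spdk.sock to come up.
  # Simplified stand-in for waitforlisten; rpc_get_methods is a stock SPDK RPC.
  /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
  until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done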
00:12:36.820 [2024-10-09 01:53:56.629450] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.820 [2024-10-09 01:53:56.629464] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.820 [2024-10-09 01:53:56.629474] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:36.820 [2024-10-09 01:53:56.631931] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.820 [2024-10-09 01:53:56.632009] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.820 [2024-10-09 01:53:56.632081] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.820 [2024-10-09 01:53:56.632087] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.388 [2024-10-09 01:53:57.156624] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x6120000292c0/0x617000007c40) succeed. 00:12:37.388 [2024-10-09 01:53:57.166323] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029440/0x617000007fc0) succeed. 00:12:37.388 [2024-10-09 01:53:57.166356] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.388 Null1 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.388 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.646 [2024-10-09 01:53:57.226876] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.646 Null2 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:37.646 01:53:57 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.646 Null3 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.646 01:53:57 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.646 Null4 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.646 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:12:37.647 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.647 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.647 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.647 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:37.647 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.647 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.647 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.647 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:12:37.647 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.647 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.647 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:37.647 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -a 192.168.100.8 -s 4420
00:12:37.906
00:12:37.906 Discovery Log Number of Records 6, Generation counter 6
00:12:37.906 =====Discovery Log Entry 0======
00:12:37.906 trtype: rdma
00:12:37.906 adrfam: ipv4
00:12:37.906 subtype: current discovery subsystem
00:12:37.906 treq: not required
00:12:37.906 portid: 0
00:12:37.906 trsvcid: 4420
00:12:37.906 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:12:37.906 traddr: 192.168.100.8
00:12:37.906 eflags: explicit discovery connections, duplicate discovery information
00:12:37.906 rdma_prtype: not specified
00:12:37.906 rdma_qptype: connected
00:12:37.906 rdma_cms: rdma-cm
00:12:37.906 rdma_pkey: 0x0000
00:12:37.906 =====Discovery Log Entry 1======
00:12:37.906 trtype: rdma
00:12:37.906 adrfam: ipv4
00:12:37.906 subtype: nvme subsystem
00:12:37.906 treq: not required
00:12:37.906 portid: 0
00:12:37.906 trsvcid: 4420
00:12:37.906 subnqn: nqn.2016-06.io.spdk:cnode1
00:12:37.906 traddr: 192.168.100.8
00:12:37.906 eflags: none
00:12:37.906 rdma_prtype: not specified
00:12:37.906 rdma_qptype: connected
00:12:37.906 rdma_cms: rdma-cm
00:12:37.906 rdma_pkey: 0x0000
00:12:37.906 =====Discovery Log Entry 2======
00:12:37.906 trtype: rdma
00:12:37.906 adrfam: ipv4
00:12:37.906 subtype: nvme subsystem
00:12:37.906 treq: not required
00:12:37.906 portid: 0
00:12:37.906 trsvcid: 4420
00:12:37.906 subnqn: nqn.2016-06.io.spdk:cnode2
00:12:37.906 traddr: 192.168.100.8
00:12:37.906 eflags: none
00:12:37.906 rdma_prtype: not specified
00:12:37.906 rdma_qptype: connected
00:12:37.906 rdma_cms: rdma-cm
00:12:37.906 rdma_pkey: 0x0000
00:12:37.906 =====Discovery Log Entry 3======
00:12:37.906 trtype: rdma
00:12:37.906 adrfam: ipv4
00:12:37.906 subtype: nvme subsystem
00:12:37.906 treq: not required
00:12:37.906 portid: 0
00:12:37.906 trsvcid: 4420
00:12:37.906 subnqn: nqn.2016-06.io.spdk:cnode3
00:12:37.906 traddr: 192.168.100.8
00:12:37.906 eflags: none
00:12:37.906 rdma_prtype: not specified
00:12:37.906 rdma_qptype: connected
00:12:37.906 rdma_cms: rdma-cm
00:12:37.906 rdma_pkey: 0x0000
00:12:37.906 =====Discovery Log Entry 4======
00:12:37.906 trtype: rdma
00:12:37.906 adrfam: ipv4
00:12:37.906 subtype: nvme subsystem
00:12:37.906 treq: not required
00:12:37.906 portid: 0
00:12:37.906 trsvcid: 4420
00:12:37.906 subnqn: nqn.2016-06.io.spdk:cnode4
00:12:37.906 traddr: 192.168.100.8
00:12:37.906 eflags: none
00:12:37.906 rdma_prtype: not specified
00:12:37.906 rdma_qptype: connected
00:12:37.906 rdma_cms: rdma-cm
00:12:37.906 rdma_pkey: 0x0000
00:12:37.906 =====Discovery Log Entry 5======
00:12:37.906 trtype: rdma
00:12:37.906 adrfam: ipv4
00:12:37.906 subtype: discovery subsystem referral
00:12:37.906 treq: not required
00:12:37.906 portid: 0
00:12:37.906 trsvcid: 4430
00:12:37.906 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:12:37.906 traddr: 192.168.100.8
00:12:37.906 eflags: none
00:12:37.906 rdma_prtype: unrecognized
00:12:37.906 rdma_qptype: unrecognized
00:12:37.906 rdma_cms: unrecognized
00:12:37.906 rdma_pkey: 0x0000
00:12:37.906 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:12:37.906 Perform nvmf subsystem discovery via RPC
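The six discovery records above fall out of the setup traced earlier: a transport, then one null bdev, subsystem, namespace, and listener per cnode, plus the port-4430 referral. Condensed into plain rpc.py calls (a sketch; the harness drives the same RPCs through its rpc_cmd wrapper):

  # Equivalent of the setup at target/discovery.sh@23-35 in the trace above.
  rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  for i in 1 2 3 4; do
          "$rpc" bdev_null_create Null$i 102400 512
          "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
          "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
          "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
  done
  "$rpc" nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  "$rpc" nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430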
00:12:37.906 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:12:37.906 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:37.906 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:37.906 [
00:12:37.906 {
00:12:37.906 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:12:37.906 "subtype": "Discovery",
00:12:37.906 "listen_addresses": [
00:12:37.906 {
00:12:37.906 "trtype": "RDMA",
00:12:37.906 "adrfam": "IPv4",
00:12:37.906 "traddr": "192.168.100.8",
00:12:37.906 "trsvcid": "4420"
00:12:37.906 }
00:12:37.906 ],
00:12:37.906 "allow_any_host": true,
00:12:37.906 "hosts": []
00:12:37.906 },
00:12:37.906 {
00:12:37.906 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:12:37.906 "subtype": "NVMe",
00:12:37.906 "listen_addresses": [
00:12:37.906 {
00:12:37.906 "trtype": "RDMA",
00:12:37.906 "adrfam": "IPv4",
00:12:37.906 "traddr": "192.168.100.8",
00:12:37.906 "trsvcid": "4420"
00:12:37.906 }
00:12:37.906 ],
00:12:37.906 "allow_any_host": true,
00:12:37.906 "hosts": [],
00:12:37.906 "serial_number": "SPDK00000000000001",
00:12:37.906 "model_number": "SPDK bdev Controller",
00:12:37.906 "max_namespaces": 32,
00:12:37.906 "min_cntlid": 1,
00:12:37.906 "max_cntlid": 65519,
00:12:37.906 "namespaces": [
00:12:37.906 {
00:12:37.906 "nsid": 1,
00:12:37.906 "bdev_name": "Null1",
00:12:37.906 "name": "Null1",
00:12:37.906 "nguid": "3C4DDE13A869460FAB89F04CDEA4F010",
00:12:37.906 "uuid": "3c4dde13-a869-460f-ab89-f04cdea4f010"
00:12:37.906 }
00:12:37.906 ]
00:12:37.906 },
00:12:37.906 {
00:12:37.906 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:12:37.906 "subtype": "NVMe",
00:12:37.906 "listen_addresses": [
00:12:37.906 {
00:12:37.906 "trtype": "RDMA",
00:12:37.906 "adrfam": "IPv4",
00:12:37.906 "traddr": "192.168.100.8",
00:12:37.906 "trsvcid": "4420"
00:12:37.906 }
00:12:37.906 ],
00:12:37.906 "allow_any_host": true,
00:12:37.906 "hosts": [],
00:12:37.906 "serial_number": "SPDK00000000000002",
00:12:37.906 "model_number": "SPDK bdev Controller",
00:12:37.906 "max_namespaces": 32,
00:12:37.906 "min_cntlid": 1,
00:12:37.906 "max_cntlid": 65519,
00:12:37.906 "namespaces": [
00:12:37.906 {
00:12:37.906 "nsid": 1,
00:12:37.906 "bdev_name": "Null2",
00:12:37.906 "name": "Null2",
00:12:37.906 "nguid": "E22B32B450A446EFB2428C059B5FCD24",
00:12:37.906 "uuid": "e22b32b4-50a4-46ef-b242-8c059b5fcd24"
00:12:37.906 }
00:12:37.906 ]
00:12:37.906 },
00:12:37.906 {
00:12:37.906 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:12:37.906 "subtype": "NVMe",
00:12:37.906 "listen_addresses": [
00:12:37.906 {
00:12:37.906 "trtype": "RDMA",
00:12:37.906 "adrfam": "IPv4",
00:12:37.906 "traddr": "192.168.100.8",
00:12:37.906 "trsvcid": "4420"
00:12:37.906 }
00:12:37.906 ],
00:12:37.906 "allow_any_host": true,
00:12:37.906 "hosts": [],
00:12:37.906 "serial_number": "SPDK00000000000003",
00:12:37.906 "model_number": "SPDK bdev Controller",
00:12:37.906 "max_namespaces": 32,
00:12:37.906 "min_cntlid": 1,
00:12:37.906 "max_cntlid": 65519,
00:12:37.906 "namespaces": [
00:12:37.906 {
00:12:37.906 "nsid": 1,
00:12:37.906 "bdev_name": "Null3",
00:12:37.906 "name": "Null3",
00:12:37.906 "nguid": "F519711052EB427FACE2B6C6D3D38D94",
00:12:37.906 "uuid": "f5197110-52eb-427f-ace2-b6c6d3d38d94"
00:12:37.906 }
00:12:37.906 ]
00:12:37.906 },
00:12:37.906 {
00:12:37.906 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:12:37.906 "subtype": "NVMe",
00:12:37.906 "listen_addresses": [
00:12:37.906 {
00:12:37.906 "trtype": "RDMA",
00:12:37.906 "adrfam": "IPv4",
00:12:37.906 "traddr": "192.168.100.8",
00:12:37.906 "trsvcid": "4420"
00:12:37.906 }
00:12:37.906 ],
00:12:37.906 "allow_any_host": true,
00:12:37.906 "hosts": [],
00:12:37.906 "serial_number": "SPDK00000000000004",
00:12:37.906 "model_number": "SPDK bdev Controller",
00:12:37.906 "max_namespaces": 32,
00:12:37.906 "min_cntlid": 1,
00:12:37.906 "max_cntlid": 65519,
00:12:37.906 "namespaces": [
00:12:37.906 {
00:12:37.906 "nsid": 1,
00:12:37.906 "bdev_name": "Null4",
00:12:37.906 "name": "Null4",
00:12:37.906 "nguid": "63E56D74AA4849DE87025C410D898BD7",
00:12:37.906 "uuid": "63e56d74-aa48-49de-8702-5c410d898bd7"
00:12:37.906 }
00:12:37.906 ]
00:12:37.906 }
00:12:37.906 ]
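The same dump can be sliced with jq, the way the harness does later at target/discovery.sh@49 with bdev_get_bdevs; the .[].nqn filter here is an illustrative variant, not a line from this run:

  # Pull just the subsystem NQNs out of the nvmf_get_subsystems output above.
  scripts/rpc.py nvmf_get_subsystems | jq -r '.[].nqn'
  # -> nqn.2014-08.org.nvmexpress.discovery
  # -> nqn.2016-06.io.spdk:cnode1 ... nqn.2016-06.io.spdk:cnode4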
"trtype": "RDMA", 00:12:37.906 "adrfam": "IPv4", 00:12:37.906 "traddr": "192.168.100.8", 00:12:37.906 "trsvcid": "4420" 00:12:37.906 } 00:12:37.906 ], 00:12:37.906 "allow_any_host": true, 00:12:37.906 "hosts": [], 00:12:37.906 "serial_number": "SPDK00000000000004", 00:12:37.906 "model_number": "SPDK bdev Controller", 00:12:37.906 "max_namespaces": 32, 00:12:37.906 "min_cntlid": 1, 00:12:37.906 "max_cntlid": 65519, 00:12:37.906 "namespaces": [ 00:12:37.906 { 00:12:37.906 "nsid": 1, 00:12:37.906 "bdev_name": "Null4", 00:12:37.906 "name": "Null4", 00:12:37.906 "nguid": "63E56D74AA4849DE87025C410D898BD7", 00:12:37.906 "uuid": "63e56d74-aa48-49de-8702-5c410d898bd7" 00:12:37.906 } 00:12:37.906 ] 00:12:37.906 } 00:12:37.906 ] 00:12:37.906 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.906 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:37.906 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:37.906 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:37.907 
01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:37.907 01:53:57 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:37.907 rmmod nvme_rdma 00:12:37.907 rmmod nvme_fabrics 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 3185499 ']' 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 3185499 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 3185499 ']' 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 3185499 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:37.907 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3185499 00:12:38.166 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:38.166 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:38.166 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3185499' 00:12:38.166 killing process with pid 3185499 00:12:38.166 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 3185499 00:12:38.166 01:53:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 3185499 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:12:39.541 00:12:39.541 real 0m9.488s 00:12:39.541 user 0m10.385s 00:12:39.541 sys 0m5.610s 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.541 ************************************ 00:12:39.541 END TEST 
nvmf_target_discovery 00:12:39.541 ************************************ 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:39.541 ************************************ 00:12:39.541 START TEST nvmf_referrals 00:12:39.541 ************************************ 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:12:39.541 * Looking for test storage... 00:12:39.541 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:39.541 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:39.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.542 --rc genhtml_branch_coverage=1 00:12:39.542 --rc genhtml_function_coverage=1 00:12:39.542 --rc genhtml_legend=1 00:12:39.542 --rc geninfo_all_blocks=1 00:12:39.542 --rc geninfo_unexecuted_blocks=1 00:12:39.542 00:12:39.542 ' 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:39.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.542 --rc genhtml_branch_coverage=1 00:12:39.542 --rc genhtml_function_coverage=1 00:12:39.542 --rc genhtml_legend=1 00:12:39.542 --rc geninfo_all_blocks=1 00:12:39.542 --rc geninfo_unexecuted_blocks=1 00:12:39.542 00:12:39.542 ' 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:39.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.542 --rc genhtml_branch_coverage=1 00:12:39.542 --rc genhtml_function_coverage=1 00:12:39.542 --rc genhtml_legend=1 00:12:39.542 --rc geninfo_all_blocks=1 00:12:39.542 --rc geninfo_unexecuted_blocks=1 00:12:39.542 00:12:39.542 ' 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:39.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.542 --rc genhtml_branch_coverage=1 00:12:39.542 --rc genhtml_function_coverage=1 00:12:39.542 --rc genhtml_legend=1 00:12:39.542 --rc geninfo_all_blocks=1 00:12:39.542 --rc geninfo_unexecuted_blocks=1 00:12:39.542 00:12:39.542 ' 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@7 -- # uname -s 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:12:39.542 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:39.801 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:39.801 01:53:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:12:46.361 Found 0000:18:00.0 (0x8086 - 0x159b) 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 
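What the trace above is doing: gather_supported_nvmf_pci_devs buckets RDMA-capable NICs into e810, x722 and mlx arrays keyed by PCI vendor:device ID, and the per-device loop that starts below walks the matches. A minimal standalone sketch of that classification, under stated assumptions: it parses lspci -Dnmm instead of the harness's prebuilt pci_bus_cache, and it buckets every Mellanox device together where the harness matches specific IDs.

#!/usr/bin/env bash
# Sketch only, not the harness: classify NICs by PCI vendor:device ID.
intel=0x8086 mellanox=0x15b3
declare -a e810 x722 mlx
while read -r slot vendor device; do
  case "$vendor:$device" in
    "$intel:0x1592" | "$intel:0x159b") e810+=("$slot") ;;  # E810 family; 0x159b is what this rig reports
    "$intel:0x37d2")                   x722+=("$slot") ;;  # X722 iWARP
    "$mellanox:"*)                     mlx+=("$slot")  ;;  # ConnectX; the harness lists exact IDs (0x1017, 0x1019, ...)
  esac
done < <(lspci -Dnmm | awk '{gsub(/"/, ""); print $1, "0x"$3, "0x"$4}')
echo "e810: ${e810[*]:-none}  x722: ${x722[*]:-none}  mlx: ${mlx[*]:-none}"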
00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:12:46.361 Found 0000:18:00.1 (0x8086 - 0x159b) 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@403 -- # modinfo irdma 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:46.361 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:12:46.362 Found net devices under 0000:18:00.0: cvl_0_0 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:12:46.362 Found net devices under 0000:18:00.1: cvl_0_1 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # rdma_device_init 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@528 -- # allocate_nic_ips 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo cvl_0_0 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo cvl_0_1 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:12:46.362 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:12:46.362 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:12:46.362 altname enp24s0f0np0 00:12:46.362 altname ens785f0np0 00:12:46.362 inet 192.168.100.8/24 scope global cvl_0_0 00:12:46.362 valid_lft forever preferred_lft forever 00:12:46.362 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:12:46.362 valid_lft forever preferred_lft forever 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:12:46.362 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:12:46.362 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:12:46.362 altname enp24s0f1np1 00:12:46.362 altname ens785f1np1 00:12:46.362 
inet 192.168.100.9/24 scope global cvl_0_1 00:12:46.362 valid_lft forever preferred_lft forever 00:12:46.362 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:12:46.362 valid_lft forever preferred_lft forever 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo cvl_0_0 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo cvl_0_1 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:12:46.362 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:46.362 
01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:12:46.363 192.168.100.9' 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # head -n 1 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:12:46.363 192.168.100.9' 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:12:46.363 192.168.100.9' 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # tail -n +2 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # head -n 1 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=3189400 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 3189400 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 3189400 ']' 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.363 01:54:05 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.363 [2024-10-09 01:54:06.022885] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:12:46.363 [2024-10-09 01:54:06.023007] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.363 [2024-10-09 01:54:06.151917] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.621 [2024-10-09 01:54:06.343725] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.621 [2024-10-09 01:54:06.343773] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.621 [2024-10-09 01:54:06.343802] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.621 [2024-10-09 01:54:06.343817] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.621 [2024-10-09 01:54:06.343827] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.621 [2024-10-09 01:54:06.346178] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.621 [2024-10-09 01:54:06.346209] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.621 [2024-10-09 01:54:06.346270] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.621 [2024-10-09 01:54:06.346277] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.186 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:47.186 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:47.186 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:47.186 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:47.186 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.186 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.186 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:47.186 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.186 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.186 [2024-10-09 01:54:06.906998] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x6120000292c0/0x617000007c40) succeed. 
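With the target up and the RDMA transport created, everything that follows in this test is referral bookkeeping: add three referrals over the RPC socket, confirm them twice (once via the RPC, once as a host running nvme discover), then remove them one by one. A rough replay of that sequence, with the rpc.py entry point, addresses, ports and jq filter taken from the trace; the --hostnqn/--hostid pair the harness passes to nvme discover is omitted here for brevity.

rpc=./spdk/scripts/rpc.py
$rpc nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  $rpc nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
done
$rpc nvmf_discovery_get_referrals | jq length   # the trace expects 3
# Host-side view: discovery log entries minus the current discovery
# subsystem itself are exactly the configured referrals.
nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
$rpc nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430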
00:12:47.186 [2024-10-09 01:54:06.916772] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029440/0x617000007fc0) succeed. 00:12:47.186 [2024-10-09 01:54:06.916808] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:12:47.186 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.186 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:12:47.186 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.186 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.186 [2024-10-09 01:54:06.929157] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:12:47.186 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.186 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:12:47.186 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.186 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.187 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.187 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:12:47.187 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.187 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.187 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.187 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:12:47.187 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.187 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.187 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.187 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:47.187 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:47.187 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.187 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.187 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.187 01:54:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:47.187 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:47.187 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:47.445 
01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.445 
01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:47.445 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r 
'.[].address.traddr' 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:47.704 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:47.962 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:47.962 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:47.962 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:47.962 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:47.962 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:47.962 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:47.962 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:47.962 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:47.962 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:47.962 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:47.962 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:47.962 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 
--hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:47.962 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:48.220 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:48.220 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:48.220 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.220 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.220 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.220 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:48.220 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:48.220 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:48.220 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:48.220 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.220 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.220 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:48.221 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.221 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:48.221 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:48.221 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:48.221 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:48.221 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:48.221 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:48.221 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:48.221 01:54:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:48.221 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:48.221 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:48.221 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:48.221 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@75 -- # jq -r .subnqn 00:12:48.221 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:48.221 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:48.221 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:48.479 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:48.479 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:48.479 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:48.479 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:48.479 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:48.479 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:48.479 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:48.479 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:48.479 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.479 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.479 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.479 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:48.479 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:48.479 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.479 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.479 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 
--hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:48.739 rmmod nvme_rdma 00:12:48.739 rmmod nvme_fabrics 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 3189400 ']' 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 3189400 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 3189400 ']' 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 3189400 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3189400 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3189400' 00:12:48.739 killing process with pid 3189400 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 3189400 00:12:48.739 01:54:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 3189400 00:12:50.192 01:54:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso 
']' 00:12:50.192 01:54:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:12:50.192 00:12:50.192 real 0m10.704s 00:12:50.192 user 0m15.529s 00:12:50.192 sys 0m6.064s 00:12:50.192 01:54:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.192 01:54:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.192 ************************************ 00:12:50.192 END TEST nvmf_referrals 00:12:50.192 ************************************ 00:12:50.192 01:54:09 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:12:50.192 01:54:09 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:50.192 01:54:09 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.192 01:54:09 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:50.192 ************************************ 00:12:50.192 START TEST nvmf_connect_disconnect 00:12:50.192 ************************************ 00:12:50.192 01:54:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:12:50.451 * Looking for test storage... 00:12:50.451 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 
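The scripts/common.sh trace that begins here is a version gate: the output of lcov --version is compared against 2 by splitting version strings on ., - or : and comparing the fields numerically, so an old lcov keeps the legacy --rc option spelling. A compact re-implementation under stated assumptions (purely numeric version fields; ver_lt is a hypothetical name, the harness spells it lt via cmp_versions):

ver_lt() {  # ver_lt 1.15 2 -> success when the first version sorts lower
  local IFS=.-:
  local -a a=($1) b=($2)
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
    (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
  done
  return 1  # equal versions are not less-than
}
ver_lt "$(lcov --version | awk '{print $NF}')" 2 \
  && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'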
00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:50.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.451 --rc genhtml_branch_coverage=1 00:12:50.451 --rc genhtml_function_coverage=1 00:12:50.451 --rc genhtml_legend=1 00:12:50.451 --rc geninfo_all_blocks=1 00:12:50.451 --rc geninfo_unexecuted_blocks=1 00:12:50.451 00:12:50.451 ' 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:50.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.451 --rc genhtml_branch_coverage=1 00:12:50.451 --rc genhtml_function_coverage=1 00:12:50.451 --rc genhtml_legend=1 00:12:50.451 --rc geninfo_all_blocks=1 00:12:50.451 --rc geninfo_unexecuted_blocks=1 00:12:50.451 00:12:50.451 ' 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:50.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.451 --rc genhtml_branch_coverage=1 00:12:50.451 --rc genhtml_function_coverage=1 00:12:50.451 --rc genhtml_legend=1 00:12:50.451 --rc geninfo_all_blocks=1 00:12:50.451 --rc geninfo_unexecuted_blocks=1 00:12:50.451 00:12:50.451 ' 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:50.451 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.451 --rc genhtml_branch_coverage=1 00:12:50.451 --rc genhtml_function_coverage=1 00:12:50.451 --rc genhtml_legend=1 00:12:50.451 --rc geninfo_all_blocks=1 00:12:50.451 --rc geninfo_unexecuted_blocks=1 00:12:50.451 00:12:50.451 ' 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.451 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.452 01:54:10 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:50.452 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:50.452 01:54:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.016 01:54:15 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:12:57.016 Found 0000:18:00.0 (0x8086 - 0x159b) 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:57.016 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:12:57.017 Found 0000:18:00.1 (0x8086 - 0x159b) 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@403 -- # modinfo irdma 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.017 01:54:15 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:12:57.017 Found net devices under 0000:18:00.0: cvl_0_0 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:12:57.017 Found net devices under 0000:18:00.1: cvl_0_1 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # rdma_device_init 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@528 -- # allocate_nic_ips 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:57.017 01:54:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo cvl_0_0 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo cvl_0_1 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:12:57.017 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:12:57.017 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:12:57.017 altname enp24s0f0np0 00:12:57.017 altname ens785f0np0 00:12:57.017 
inet 192.168.100.8/24 scope global cvl_0_0 00:12:57.017 valid_lft forever preferred_lft forever 00:12:57.017 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:12:57.017 valid_lft forever preferred_lft forever 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:12:57.017 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:12:57.017 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:12:57.017 altname enp24s0f1np1 00:12:57.017 altname ens785f1np1 00:12:57.017 inet 192.168.100.9/24 scope global cvl_0_1 00:12:57.017 valid_lft forever preferred_lft forever 00:12:57.017 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:12:57.017 valid_lft forever preferred_lft forever 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_0 
== \c\v\l\_\0\_\1 ]] 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.017 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo cvl_0_0 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo cvl_0_1 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:12:57.018 192.168.100.9' 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # head -n 1 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:12:57.018 192.168.100.9' 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:12:57.018 192.168.100.9' 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # tail -n +2 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # head -n 1 
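[Editor's note] The trace above resolves each RDMA interface to its IPv4 address, then splits the resulting list into first and second target IPs with head/tail. A condensed sketch of that pipeline as traced at nvmf/common.sh@116-117 and @483-484; the interface names cvl_0_0/cvl_0_1 are what this particular run discovered, not fixed values:

# Field 4 of `ip -o -4 addr show <if>` is the CIDR address, e.g. 192.168.100.8/24;
# cut strips the prefix length.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST=$(for ifc in cvl_0_0 cvl_0_1; do get_ip_address "$ifc"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9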
00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=3192777 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 3192777 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 3192777 ']' 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:57.018 01:54:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.018 [2024-10-09 01:54:16.226474] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:12:57.018 [2024-10-09 01:54:16.226596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.018 [2024-10-09 01:54:16.361638] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:57.018 [2024-10-09 01:54:16.557278] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.018 [2024-10-09 01:54:16.557335] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
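[Editor's note] At this point the host-side module is loaded and nvmf_tgt is started, then the harness waits for the target's RPC socket. A hedged sketch of that sequence; the polling loop is an assumption about what waitforlisten does (the real helper lives in autotest_common.sh), while the binary path and flags are taken from the trace:

modprobe nvme-rdma
"$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                       # recorded as nvmfpid=3192777 in this run
rpc_sock=/var/tmp/spdk.sock
# Poll until the target accepts RPCs on its UNIX-domain socket.
until [ -S "$rpc_sock" ] && "$rootdir"/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; do
    sleep 0.5
done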
00:12:57.018 [2024-10-09 01:54:16.557363] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.018 [2024-10-09 01:54:16.557381] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.018 [2024-10-09 01:54:16.557391] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.018 [2024-10-09 01:54:16.559722] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.018 [2024-10-09 01:54:16.559791] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.018 [2024-10-09 01:54:16.559892] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.018 [2024-10-09 01:54:16.559898] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.276 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:57.276 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:57.276 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:57.276 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:57.276 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.276 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.276 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:12:57.276 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.276 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.276 [2024-10-09 01:54:17.085306] rdma.c:2735:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:12:57.534 [2024-10-09 01:54:17.102134] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x6120000292c0/0x617000007c40) succeed. 00:12:57.534 [2024-10-09 01:54:17.111913] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029440/0x617000007fc0) succeed. 00:12:57.534 [2024-10-09 01:54:17.111950] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:12:57.534 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.534 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:57.534 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.534 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.534 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.534 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:57.535 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:57.535 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.535 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.535 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.535 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:57.535 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.535 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.535 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.535 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:57.535 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.535 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.535 [2024-10-09 01:54:17.218387] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:57.535 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.535 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:57.535 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:57.535 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:57.535 01:54:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:00.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.233 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
[the "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line repeats for every remaining iteration of the 100-cycle connect/disconnect loop, 00:13:16 through 00:17:27]
00:17:30.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.242 01:58:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:30.242 01:58:49
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:30.242 01:58:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:30.242 01:58:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:17:30.242 01:58:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:30.242 01:58:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:30.242 01:58:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:17:30.243 01:58:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:30.243 01:58:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:30.243 rmmod nvme_rdma 00:17:30.243 rmmod nvme_fabrics 00:17:30.243 01:58:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:30.243 01:58:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:17:30.243 01:58:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:17:30.243 01:58:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 3192777 ']' 00:17:30.243 01:58:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 3192777 00:17:30.243 01:58:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3192777 ']' 00:17:30.243 01:58:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 3192777 00:17:30.243 01:58:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:17:30.243 01:58:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:30.243 01:58:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3192777 00:17:30.243 01:58:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:30.243 01:58:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:30.243 01:58:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3192777' 00:17:30.243 killing process with pid 3192777 00:17:30.243 01:58:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 3192777 00:17:30.243 01:58:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 3192777 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:17:32.149 00:17:32.149 real 4m41.580s 00:17:32.149 user 18m17.377s 00:17:32.149 sys 0m19.509s 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:32.149 
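[Editor's note] Teardown above unloads nvme-rdma/nvme-fabrics and then stops the target via killprocess. A minimal sketch of the kill logic visible in the trace (kill -0 liveness check, ps to resolve the process name, the sudo comparison, kill, then wait); the sudo branch is simplified here, as the traced helper handles that case rather than refusing:

killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2> /dev/null || return 0           # already gone
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for nvmf_tgt here
    if [ "$process_name" != sudo ]; then              # traced: '[' reactor_0 = sudo ']'
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    fi
}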
************************************ 00:17:32.149 END TEST nvmf_connect_disconnect 00:17:32.149 ************************************ 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:32.149 ************************************ 00:17:32.149 START TEST nvmf_multitarget 00:17:32.149 ************************************ 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:17:32.149 * Looking for test storage... 00:17:32.149 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:32.149 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:32.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.150 --rc genhtml_branch_coverage=1 00:17:32.150 --rc genhtml_function_coverage=1 00:17:32.150 --rc genhtml_legend=1 00:17:32.150 --rc geninfo_all_blocks=1 00:17:32.150 --rc geninfo_unexecuted_blocks=1 00:17:32.150 00:17:32.150 ' 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:32.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.150 --rc genhtml_branch_coverage=1 00:17:32.150 --rc genhtml_function_coverage=1 00:17:32.150 --rc genhtml_legend=1 00:17:32.150 --rc geninfo_all_blocks=1 00:17:32.150 --rc geninfo_unexecuted_blocks=1 00:17:32.150 00:17:32.150 ' 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:32.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.150 --rc genhtml_branch_coverage=1 00:17:32.150 --rc genhtml_function_coverage=1 00:17:32.150 --rc genhtml_legend=1 00:17:32.150 --rc geninfo_all_blocks=1 00:17:32.150 --rc geninfo_unexecuted_blocks=1 00:17:32.150 00:17:32.150 ' 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:32.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.150 --rc genhtml_branch_coverage=1 00:17:32.150 --rc genhtml_function_coverage=1 00:17:32.150 --rc genhtml_legend=1 00:17:32.150 --rc geninfo_all_blocks=1 00:17:32.150 --rc geninfo_unexecuted_blocks=1 00:17:32.150 00:17:32.150 ' 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:17:32.150 01:58:51 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:32.150 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:32.150 
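The lt 1.15 2 walk traced at the top of this test is scripts/common.sh deciding whether the installed lcov predates version 2: both version strings are split on ".", "-" and ":" into arrays, the components are compared pairwise, and the helper succeeds as soon as a component of the first version is smaller. A minimal standalone sketch of that logic, assuming only plain bash (an illustrative re-implementation of the traced helper, not the harness source):

    # lt A B -> exit 0 when version A sorts strictly before version B
    lt() {
        local -a ver1 ver2
        local v len
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first is newer: not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first is older: less-than
        done
        return 1   # equal versions: not strictly less-than
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2"   # matches the traced return 0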
01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:32.150 01:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:17:38.723 Found 0000:18:00.0 (0x8086 - 0x159b) 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:17:38.723 Found 0000:18:00.1 (0x8086 - 0x159b) 00:17:38.723 
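gather_supported_nvmf_pci_devs, traced above, builds per-family arrays (e810, x722, mlx) by indexing a pci_bus_cache map keyed "vendor:device" and then keeps only the family selected for this run; on this node the two 0x8086:0x159b hits are the Intel E810-XXV ports found next. A sketch of the lookup idea with a stubbed cache (in the harness the cache is populated from lspci; the values below are just this node's addresses):

    # Stubbed cache: the harness fills this map from lspci output.
    declare -A pci_bus_cache=( ["0x8086:0x159b"]="0000:18:00.0 0000:18:00.1" )
    intel=0x8086 mellanox=0x15b3
    e810=() mlx=()
    e810+=(${pci_bus_cache["$intel:0x159b"]})     # E810-XXV, as found above
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})   # ConnectX-5: empty on this node
    pci_devs=("${e810[@]}")                       # SPDK_TEST_NVMF_NICS=e810 selects this family
    for pci in "${pci_devs[@]}"; do echo "candidate NIC: $pci"; done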
01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:38.723 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@403 -- # modinfo irdma 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:17:38.724 Found net devices under 0000:18:00.0: cvl_0_0 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:17:38.724 Found net devices under 0000:18:00.1: cvl_0_1 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:38.724 01:58:57 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # rdma_device_init 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:38.724 01:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@528 -- # allocate_nic_ips 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo cvl_0_0 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo cvl_0_1 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:17:38.724 28: cvl_0_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:17:38.724 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:17:38.724 altname enp24s0f0np0 00:17:38.724 altname ens785f0np0 00:17:38.724 inet 192.168.100.8/24 scope global cvl_0_0 00:17:38.724 valid_lft forever preferred_lft forever 00:17:38.724 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:17:38.724 valid_lft forever preferred_lft forever 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:17:38.724 29: cvl_0_1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:17:38.724 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:17:38.724 altname enp24s0f1np1 00:17:38.724 altname ens785f1np1 00:17:38.724 inet 192.168.100.9/24 scope global cvl_0_1 00:17:38.724 valid_lft forever preferred_lft forever 00:17:38.724 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:17:38.724 valid_lft forever preferred_lft forever 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@448 -- # return 0 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo cvl_0_0 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo cvl_0_1 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:38.724 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:38.725 
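allocate_nic_ips and get_available_rdma_ips both reduce to the same pipeline traced at nvmf/common.sh@117: ip -o -4 prints one line per address, field 4 is the CIDR, and cut strips the prefix length. A condensed sketch of that extraction plus the first/second target split performed just below (the function body mirrors the traced commands and the variable names match the harness):

    get_ip_address() {
        local interface=$1
        # $4 of "ip -o -4 addr show" is e.g. 192.168.100.8/24; drop the /24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST=$(printf '%s\n%s' "$(get_ip_address cvl_0_0)" "$(get_ip_address cvl_0_1)")
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)    # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2)  # 192.168.100.9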
01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:17:38.725 192.168.100.9' 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:17:38.725 192.168.100.9' 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # head -n 1 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # head -n 1 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:17:38.725 192.168.100.9' 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # tail -n +2 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=3232927 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 3232927 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 3232927 ']' 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:38.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:38.725 01:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:38.725 [2024-10-09 01:58:58.286047] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:17:38.725 [2024-10-09 01:58:58.286157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.725 [2024-10-09 01:58:58.415420] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:38.984 [2024-10-09 01:58:58.617432] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.984 [2024-10-09 01:58:58.617487] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.984 [2024-10-09 01:58:58.617501] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.984 [2024-10-09 01:58:58.617518] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.984 [2024-10-09 01:58:58.617529] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.984 [2024-10-09 01:58:58.623576] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.984 [2024-10-09 01:58:58.623600] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.984 [2024-10-09 01:58:58.623671] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:17:38.984 [2024-10-09 01:58:58.623665] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.553 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:39.553 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:17:39.553 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:39.553 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:39.553 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:39.553 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.553 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:39.553 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:39.553 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:39.553 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:39.553 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget 
-- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:39.553 "nvmf_tgt_1" 00:17:39.553 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:39.812 "nvmf_tgt_2" 00:17:39.812 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:39.812 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:39.812 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:39.812 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:40.071 true 00:17:40.071 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:40.071 true 00:17:40.071 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:40.071 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:40.331 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:40.331 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:40.331 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:40.331 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:40.331 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:40.331 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:40.331 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:40.331 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:40.331 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:40.331 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:40.331 rmmod nvme_rdma 00:17:40.331 rmmod nvme_fabrics 00:17:40.331 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:40.331 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:40.331 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:40.331 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 3232927 ']' 00:17:40.331 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 3232927 00:17:40.331 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 3232927 ']' 00:17:40.331 01:58:59 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 3232927 00:17:40.331 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:17:40.331 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:40.331 01:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3232927 00:17:40.331 01:59:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:40.331 01:59:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:40.331 01:59:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3232927' 00:17:40.331 killing process with pid 3232927 00:17:40.331 01:59:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 3232927 00:17:40.331 01:59:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 3232927 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:17:41.709 00:17:41.709 real 0m9.681s 00:17:41.709 user 0m12.313s 00:17:41.709 sys 0m5.582s 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:41.709 ************************************ 00:17:41.709 END TEST nvmf_multitarget 00:17:41.709 ************************************ 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:41.709 ************************************ 00:17:41.709 START TEST nvmf_rpc 00:17:41.709 ************************************ 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:17:41.709 * Looking for test storage... 
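The multitarget test that just ended exercises one RPC listener managing several nvmf targets: create nvmf_tgt_1 and nvmf_tgt_2 (each with -s 32, as passed above), confirm nvmf_get_targets now reports three entries (the default target plus the two new ones), delete both, and confirm the count drops back to one. The whole flow condensed into a sketch (rpc_py path, RPC names, and expected counts are taken from the trace; the compact framing is illustrative):

    rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" != 3 ] && exit 1   # default + 2 new
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" != 1 ] && exit 1   # back to default only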
00:17:41.709 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:41.709 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:41.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.968 --rc genhtml_branch_coverage=1 00:17:41.968 --rc genhtml_function_coverage=1 00:17:41.968 --rc genhtml_legend=1 00:17:41.968 --rc geninfo_all_blocks=1 00:17:41.968 --rc geninfo_unexecuted_blocks=1 00:17:41.968 00:17:41.968 ' 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:41.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.968 --rc genhtml_branch_coverage=1 00:17:41.968 --rc genhtml_function_coverage=1 00:17:41.968 --rc genhtml_legend=1 00:17:41.968 --rc geninfo_all_blocks=1 00:17:41.968 --rc geninfo_unexecuted_blocks=1 00:17:41.968 00:17:41.968 ' 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:41.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.968 --rc genhtml_branch_coverage=1 00:17:41.968 --rc genhtml_function_coverage=1 00:17:41.968 --rc genhtml_legend=1 00:17:41.968 --rc geninfo_all_blocks=1 00:17:41.968 --rc geninfo_unexecuted_blocks=1 00:17:41.968 00:17:41.968 ' 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:41.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.968 --rc genhtml_branch_coverage=1 00:17:41.968 --rc genhtml_function_coverage=1 00:17:41.968 --rc genhtml_legend=1 00:17:41.968 --rc geninfo_all_blocks=1 00:17:41.968 --rc geninfo_unexecuted_blocks=1 00:17:41.968 00:17:41.968 ' 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.968 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:41.969 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:17:41.969 01:59:01 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:41.969 01:59:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:48.547 01:59:07 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:48.547 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:17:48.548 Found 0000:18:00.0 (0x8086 - 0x159b) 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:17:48.548 Found 0000:18:00.1 (0x8086 - 0x159b) 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:48.548 
01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@403 -- # modinfo irdma 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:17:48.548 Found net devices under 0000:18:00.0: cvl_0_0 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:17:48.548 Found net devices under 0000:18:00.1: cvl_0_1 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # rdma_device_init 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 
00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@528 -- # allocate_nic_ips 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo cvl_0_0 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo cvl_0_1 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 
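The three piped commands just traced (nvmf/common.sh@117) are the whole of the get_ip_address helper: "ip -o -4 addr show" prints one address per line, awk picks field 4 (the ADDR/PREFIX pair), and cut drops the prefix length. A minimal standalone sketch of that helper as it runs here, with the surrounding per-NIC loop omitted:

get_ip_address() {
    local interface=$1
    # -o packs each address onto a single line; field 4 is e.g. "192.168.100.8/24",
    # so cutting at the first "/" leaves the bare IPv4 address
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address cvl_0_0   # prints 192.168.100.8 on this testbed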
00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:17:48.548 28: cvl_0_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:17:48.548 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:17:48.548 altname enp24s0f0np0 00:17:48.548 altname ens785f0np0 00:17:48.548 inet 192.168.100.8/24 scope global cvl_0_0 00:17:48.548 valid_lft forever preferred_lft forever 00:17:48.548 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:17:48.548 valid_lft forever preferred_lft forever 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:17:48.548 29: cvl_0_1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:17:48.548 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:17:48.548 altname enp24s0f1np1 00:17:48.548 altname ens785f1np1 00:17:48.548 inet 192.168.100.9/24 scope global cvl_0_1 00:17:48.548 valid_lft forever preferred_lft forever 00:17:48.548 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:17:48.548 valid_lft forever preferred_lft forever 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:48.548 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:48.549 01:59:07
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo cvl_0_0 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo cvl_0_1 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:17:48.549 192.168.100.9' 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:17:48.549 192.168.100.9' 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # head -n 1 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:17:48.549 192.168.100.9' 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # tail -n +2 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # head -n 1 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # '[' -z 
192.168.100.8 ']' 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=3236224 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 3236224 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 3236224 ']' 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:48.549 01:59:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:48.549 [2024-10-09 01:59:08.072052] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:17:48.549 [2024-10-09 01:59:08.072160] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.549 [2024-10-09 01:59:08.210381] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:48.808 [2024-10-09 01:59:08.417481] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.808 [2024-10-09 01:59:08.417541] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.808 [2024-10-09 01:59:08.417555] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.808 [2024-10-09 01:59:08.417571] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:48.808 [2024-10-09 01:59:08.417582] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
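The startup traced above is nvmfappstart: the test forks build/bin/nvmf_tgt with shared-memory id 0 (-i 0), the full tracepoint mask (-e 0xFFFF) and a four-core mask (-m 0xF), records the PID in nvmfpid, and waitforlisten blocks until the application answers on /var/tmp/spdk.sock. A condensed sketch of that launch-and-wait step, assuming $rootdir points at the SPDK checkout; the polling loop is a simplified stand-in for the real waitforlisten helper:

# start the target in the background with the flags used in this run
"$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# poll the RPC socket; rpc.py keeps failing until the app is up and listening
for ((i = 0; i < 100; i++)); do
    "$rootdir"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.5
done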
00:17:48.808 [2024-10-09 01:59:08.420020] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.808 [2024-10-09 01:59:08.420087] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.808 [2024-10-09 01:59:08.420110] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:17:48.808 [2024-10-09 01:59:08.420108] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.068 01:59:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:49.068 01:59:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:17:49.068 01:59:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:49.068 01:59:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:49.068 01:59:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.327 01:59:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.327 01:59:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:49.327 01:59:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.327 01:59:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.327 01:59:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.327 01:59:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:49.327 "tick_rate": 2300000000, 00:17:49.327 "poll_groups": [ 00:17:49.327 { 00:17:49.327 "name": "nvmf_tgt_poll_group_000", 00:17:49.327 "admin_qpairs": 0, 00:17:49.327 "io_qpairs": 0, 00:17:49.327 "current_admin_qpairs": 0, 00:17:49.327 "current_io_qpairs": 0, 00:17:49.327 "pending_bdev_io": 0, 00:17:49.327 "completed_nvme_io": 0, 00:17:49.327 "transports": [] 00:17:49.327 }, 00:17:49.327 { 00:17:49.327 "name": "nvmf_tgt_poll_group_001", 00:17:49.327 "admin_qpairs": 0, 00:17:49.327 "io_qpairs": 0, 00:17:49.327 "current_admin_qpairs": 0, 00:17:49.327 "current_io_qpairs": 0, 00:17:49.327 "pending_bdev_io": 0, 00:17:49.327 "completed_nvme_io": 0, 00:17:49.327 "transports": [] 00:17:49.327 }, 00:17:49.327 { 00:17:49.327 "name": "nvmf_tgt_poll_group_002", 00:17:49.327 "admin_qpairs": 0, 00:17:49.327 "io_qpairs": 0, 00:17:49.327 "current_admin_qpairs": 0, 00:17:49.327 "current_io_qpairs": 0, 00:17:49.327 "pending_bdev_io": 0, 00:17:49.327 "completed_nvme_io": 0, 00:17:49.327 "transports": [] 00:17:49.327 }, 00:17:49.327 { 00:17:49.327 "name": "nvmf_tgt_poll_group_003", 00:17:49.327 "admin_qpairs": 0, 00:17:49.327 "io_qpairs": 0, 00:17:49.327 "current_admin_qpairs": 0, 00:17:49.327 "current_io_qpairs": 0, 00:17:49.327 "pending_bdev_io": 0, 00:17:49.327 "completed_nvme_io": 0, 00:17:49.327 "transports": [] 00:17:49.327 } 00:17:49.327 ] 00:17:49.327 }' 00:17:49.327 01:59:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:49.327 01:59:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:49.327 01:59:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:49.327 01:59:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:49.327 01:59:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:17:49.327 01:59:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:49.327 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:49.327 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:49.327 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.327 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.327 [2024-10-09 01:59:09.063978] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x6120000292c0/0x617000007c40) succeed. 00:17:49.327 [2024-10-09 01:59:09.073798] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029440/0x617000007fc0) succeed. 00:17:49.327 [2024-10-09 01:59:09.073833] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:17:49.327 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.327 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:49.327 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.327 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.327 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.327 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:49.327 "tick_rate": 2300000000, 00:17:49.327 "poll_groups": [ 00:17:49.327 { 00:17:49.327 "name": "nvmf_tgt_poll_group_000", 00:17:49.327 "admin_qpairs": 0, 00:17:49.327 "io_qpairs": 0, 00:17:49.327 "current_admin_qpairs": 0, 00:17:49.327 "current_io_qpairs": 0, 00:17:49.327 "pending_bdev_io": 0, 00:17:49.327 "completed_nvme_io": 0, 00:17:49.327 "transports": [ 00:17:49.327 { 00:17:49.327 "trtype": "RDMA", 00:17:49.327 "pending_data_buffer": 0, 00:17:49.327 "devices": [ 00:17:49.327 { 00:17:49.327 "name": "rocep24s0f0", 00:17:49.327 "polls": 1440, 00:17:49.327 "idle_polls": 1440, 00:17:49.327 "completions": 0, 00:17:49.327 "requests": 0, 00:17:49.327 "request_latency": 0, 00:17:49.327 "pending_free_request": 0, 00:17:49.327 "pending_rdma_read": 0, 00:17:49.327 "pending_rdma_write": 0, 00:17:49.327 "pending_rdma_send": 0, 00:17:49.327 "total_send_wrs": 0, 00:17:49.327 "send_doorbell_updates": 0, 00:17:49.327 "total_recv_wrs": 0, 00:17:49.327 "recv_doorbell_updates": 0 00:17:49.327 }, 00:17:49.327 { 00:17:49.327 "name": "rocep24s0f1", 00:17:49.327 "polls": 1440, 00:17:49.327 "idle_polls": 1440, 00:17:49.327 "completions": 0, 00:17:49.327 "requests": 0, 00:17:49.328 "request_latency": 0, 00:17:49.328 "pending_free_request": 0, 00:17:49.328 "pending_rdma_read": 0, 00:17:49.328 "pending_rdma_write": 0, 00:17:49.328 "pending_rdma_send": 0, 00:17:49.328 "total_send_wrs": 0, 00:17:49.328 "send_doorbell_updates": 0, 00:17:49.328 "total_recv_wrs": 0, 00:17:49.328 "recv_doorbell_updates": 0 00:17:49.328 } 00:17:49.328 ] 00:17:49.328 } 00:17:49.328 ] 00:17:49.328 }, 00:17:49.328 { 00:17:49.328 "name": "nvmf_tgt_poll_group_001", 00:17:49.328 "admin_qpairs": 0, 00:17:49.328 "io_qpairs": 0, 00:17:49.328 "current_admin_qpairs": 0, 00:17:49.328 "current_io_qpairs": 0, 00:17:49.328 "pending_bdev_io": 0, 00:17:49.328 
"completed_nvme_io": 0, 00:17:49.328 "transports": [ 00:17:49.328 { 00:17:49.328 "trtype": "RDMA", 00:17:49.328 "pending_data_buffer": 0, 00:17:49.328 "devices": [ 00:17:49.328 { 00:17:49.328 "name": "rocep24s0f0", 00:17:49.328 "polls": 1302, 00:17:49.328 "idle_polls": 1302, 00:17:49.328 "completions": 0, 00:17:49.328 "requests": 0, 00:17:49.328 "request_latency": 0, 00:17:49.328 "pending_free_request": 0, 00:17:49.328 "pending_rdma_read": 0, 00:17:49.328 "pending_rdma_write": 0, 00:17:49.328 "pending_rdma_send": 0, 00:17:49.328 "total_send_wrs": 0, 00:17:49.328 "send_doorbell_updates": 0, 00:17:49.328 "total_recv_wrs": 0, 00:17:49.328 "recv_doorbell_updates": 0 00:17:49.328 }, 00:17:49.328 { 00:17:49.328 "name": "rocep24s0f1", 00:17:49.328 "polls": 1302, 00:17:49.328 "idle_polls": 1302, 00:17:49.328 "completions": 0, 00:17:49.328 "requests": 0, 00:17:49.328 "request_latency": 0, 00:17:49.328 "pending_free_request": 0, 00:17:49.328 "pending_rdma_read": 0, 00:17:49.328 "pending_rdma_write": 0, 00:17:49.328 "pending_rdma_send": 0, 00:17:49.328 "total_send_wrs": 0, 00:17:49.328 "send_doorbell_updates": 0, 00:17:49.328 "total_recv_wrs": 0, 00:17:49.328 "recv_doorbell_updates": 0 00:17:49.328 } 00:17:49.328 ] 00:17:49.328 } 00:17:49.328 ] 00:17:49.328 }, 00:17:49.328 { 00:17:49.328 "name": "nvmf_tgt_poll_group_002", 00:17:49.328 "admin_qpairs": 0, 00:17:49.328 "io_qpairs": 0, 00:17:49.328 "current_admin_qpairs": 0, 00:17:49.328 "current_io_qpairs": 0, 00:17:49.328 "pending_bdev_io": 0, 00:17:49.328 "completed_nvme_io": 0, 00:17:49.328 "transports": [ 00:17:49.328 { 00:17:49.328 "trtype": "RDMA", 00:17:49.328 "pending_data_buffer": 0, 00:17:49.328 "devices": [ 00:17:49.328 { 00:17:49.328 "name": "rocep24s0f0", 00:17:49.328 "polls": 1222, 00:17:49.328 "idle_polls": 1222, 00:17:49.328 "completions": 0, 00:17:49.328 "requests": 0, 00:17:49.328 "request_latency": 0, 00:17:49.328 "pending_free_request": 0, 00:17:49.328 "pending_rdma_read": 0, 00:17:49.328 "pending_rdma_write": 0, 00:17:49.328 "pending_rdma_send": 0, 00:17:49.328 "total_send_wrs": 0, 00:17:49.328 "send_doorbell_updates": 0, 00:17:49.328 "total_recv_wrs": 0, 00:17:49.328 "recv_doorbell_updates": 0 00:17:49.328 }, 00:17:49.328 { 00:17:49.328 "name": "rocep24s0f1", 00:17:49.328 "polls": 1222, 00:17:49.328 "idle_polls": 1222, 00:17:49.328 "completions": 0, 00:17:49.328 "requests": 0, 00:17:49.328 "request_latency": 0, 00:17:49.328 "pending_free_request": 0, 00:17:49.328 "pending_rdma_read": 0, 00:17:49.328 "pending_rdma_write": 0, 00:17:49.328 "pending_rdma_send": 0, 00:17:49.328 "total_send_wrs": 0, 00:17:49.328 "send_doorbell_updates": 0, 00:17:49.328 "total_recv_wrs": 0, 00:17:49.328 "recv_doorbell_updates": 0 00:17:49.328 } 00:17:49.328 ] 00:17:49.328 } 00:17:49.328 ] 00:17:49.328 }, 00:17:49.328 { 00:17:49.328 "name": "nvmf_tgt_poll_group_003", 00:17:49.328 "admin_qpairs": 0, 00:17:49.328 "io_qpairs": 0, 00:17:49.328 "current_admin_qpairs": 0, 00:17:49.328 "current_io_qpairs": 0, 00:17:49.328 "pending_bdev_io": 0, 00:17:49.328 "completed_nvme_io": 0, 00:17:49.328 "transports": [ 00:17:49.328 { 00:17:49.328 "trtype": "RDMA", 00:17:49.328 "pending_data_buffer": 0, 00:17:49.328 "devices": [ 00:17:49.328 { 00:17:49.328 "name": "rocep24s0f0", 00:17:49.328 "polls": 847, 00:17:49.328 "idle_polls": 847, 00:17:49.328 "completions": 0, 00:17:49.328 "requests": 0, 00:17:49.328 "request_latency": 0, 00:17:49.328 "pending_free_request": 0, 00:17:49.328 "pending_rdma_read": 0, 00:17:49.328 "pending_rdma_write": 0, 00:17:49.328 "pending_rdma_send": 
0, 00:17:49.328 "total_send_wrs": 0, 00:17:49.328 "send_doorbell_updates": 0, 00:17:49.328 "total_recv_wrs": 0, 00:17:49.328 "recv_doorbell_updates": 0 00:17:49.328 }, 00:17:49.328 { 00:17:49.328 "name": "rocep24s0f1", 00:17:49.328 "polls": 847, 00:17:49.328 "idle_polls": 847, 00:17:49.328 "completions": 0, 00:17:49.328 "requests": 0, 00:17:49.328 "request_latency": 0, 00:17:49.328 "pending_free_request": 0, 00:17:49.328 "pending_rdma_read": 0, 00:17:49.328 "pending_rdma_write": 0, 00:17:49.328 "pending_rdma_send": 0, 00:17:49.328 "total_send_wrs": 0, 00:17:49.328 "send_doorbell_updates": 0, 00:17:49.328 "total_recv_wrs": 0, 00:17:49.328 "recv_doorbell_updates": 0 00:17:49.328 } 00:17:49.328 ] 00:17:49.328 } 00:17:49.328 ] 00:17:49.328 } 00:17:49.328 ] 00:17:49.328 }' 00:17:49.328 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:49.328 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:49.328 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:49.328 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:17:49.588 01:59:09 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.588 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.847 Malloc1 00:17:49.847 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.847 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:49.847 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.847 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.847 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.847 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:49.847 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.847 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.847 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.847 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:49.847 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.847 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.847 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.847 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:49.847 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.847 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.847 [2024-10-09 01:59:09.461410] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:49.847 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.848 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -a 192.168.100.8 -s 4420 00:17:49.848 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:49.848 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -a 192.168.100.8 -s 4420 00:17:49.848 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:49.848 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:49.848 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:49.848 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:49.848 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:49.848 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:49.848 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:49.848 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:49.848 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -a 192.168.100.8 -s 4420 00:17:49.848 [2024-10-09 01:59:09.504861] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712' 00:17:49.848 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:49.848 could not add new controller: failed to write to nvme-fabrics device 00:17:49.848 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:49.848 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:49.848 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:49.848 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:49.848 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:17:49.848 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.848 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.848 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.848 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:50.107 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:50.107 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:50.107 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:50.107 01:59:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
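The exchange above is the subsystem's per-host access control at work: with allow_any_host disabled (rpc.sh@54 earlier), the first connect attempt is rejected inside nvmf_qpair_access_allowed ("does not allow host") and the host sees an I/O error from /dev/nvme-fabrics; once nvmf_subsystem_add_host registers this host's NQN, the identical connect succeeds and waitforserial starts polling for the namespace. The moving parts, condensed into a bare sketch (rpc.py called directly here; the test wraps it in rpc_cmd):

# with allow_any_host off, an unlisted hostnqn is refused at connect time
rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 \
    && echo "unexpected: connect was admitted"

# whitelist the host NQN, after which the same connect goes through
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712

00:17:50.107 01:59:09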
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:52.013 01:59:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:52.013 01:59:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:52.013 01:59:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:52.271 01:59:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:52.271 01:59:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:52.271 01:59:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:52.272 01:59:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:53.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:53.209 [2024-10-09 01:59:12.774437] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712' 00:17:53.209 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:53.209 could not add new controller: failed to write to nvme-fabrics device 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.209 01:59:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:53.469 01:59:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:53.469 01:59:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:53.469 01:59:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:53.469 01:59:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:53.469 01:59:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:55.374 01:59:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:55.374 01:59:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:55.374 01:59:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:55.374 01:59:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:55.374 01:59:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:55.374 01:59:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:55.374 01:59:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:56.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:56.311 01:59:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:56.311 01:59:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:56.311 01:59:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:56.311 01:59:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:56.311 01:59:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:56.311 01:59:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:56.311 01:59:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.311 [2024-10-09 01:59:16.037941] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.311 01:59:16 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.311 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:56.569 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:56.569 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:56.569 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:56.569 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:56.569 01:59:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:59.105 01:59:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:59.105 01:59:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:59.105 01:59:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:59.105 01:59:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:59.105 01:59:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:59.105 01:59:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:59.105 01:59:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:59.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.674 [2024-10-09 01:59:19.260593] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.674 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:59.933 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:59.933 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:59.933 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:59.933 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:59.933 01:59:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 
2 00:18:01.838 01:59:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:01.838 01:59:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:01.838 01:59:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:01.838 01:59:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:01.838 01:59:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:01.838 01:59:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:01.838 01:59:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:02.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.776 [2024-10-09 01:59:22.441136] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.776 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:03.036 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:03.036 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:18:03.036 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:03.036 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:03.036 01:59:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:18:04.939 01:59:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:04.939 01:59:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:04.939 01:59:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:04.939 01:59:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:04.939 01:59:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:04.939 01:59:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:04.939 01:59:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:05.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:05.885 01:59:25 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.885 [2024-10-09 01:59:25.656946] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.885 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:06.225 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:06.225 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:18:06.225 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:06.225 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:06.225 01:59:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:18:08.204 01:59:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:08.204 01:59:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:08.204 01:59:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:08.204 01:59:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:08.204 01:59:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:08.204 01:59:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:08.204 01:59:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:09.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.143 01:59:28 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.143 [2024-10-09 01:59:28.858690] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.143 01:59:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:09.402 01:59:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:09.402 01:59:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:18:09.402 01:59:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:09.402 01:59:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:09.402 01:59:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:18:11.308 01:59:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:11.308 01:59:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:11.308 01:59:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:11.567 01:59:31 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:11.567 01:59:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:11.567 01:59:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:11.567 01:59:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:12.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:12.504 01:59:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:12.504 01:59:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:12.504 01:59:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.504 [2024-10-09 01:59:32.084402] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 
*** 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.504 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 [2024-10-09 01:59:32.136532] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 [2024-10-09 01:59:32.188725] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 [2024-10-09 01:59:32.240938] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
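For orientation: the target/rpc.sh@99-107 loop being traced here builds a subsystem, attaches an RDMA listener and a namespace, then tears everything down again, five times over. A minimal standalone sketch of the same lifecycle, assuming a running SPDK target and that scripts/rpc.py from an SPDK checkout is reachable at the path shown -- the path and loop count are illustrative, not the harness's own rpc.sh:

  #!/usr/bin/env bash
  # Sketch of the create/listen/add-ns/remove-ns/delete cycle exercised above.
  set -euo pipefail
  rpc="./scripts/rpc.py"                 # assumed location of SPDK's rpc.py
  nqn="nqn.2016-06.io.spdk:cnode1"
  for i in $(seq 1 5); do
    "$rpc" nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
    "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1   # NSID auto-assigned when -n is omitted
    "$rpc" nvmf_subsystem_allow_any_host "$nqn"
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1
    "$rpc" nvmf_delete_subsystem "$nqn"
  done

The remove_ns argument is 1 because, with -n omitted, the first namespace added to a fresh subsystem gets NSID 1 -- which is exactly what the rpc.sh@105 calls above rely on.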
00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 [2024-10-09 01:59:32.293115] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.505 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.765 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.765 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:12.765 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.765 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.765 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.765 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:12.765 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.765 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.765 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.765 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:18:12.765 "tick_rate": 2300000000, 00:18:12.765 "poll_groups": [ 00:18:12.765 { 00:18:12.765 "name": "nvmf_tgt_poll_group_000", 00:18:12.765 "admin_qpairs": 2, 00:18:12.765 "io_qpairs": 27, 00:18:12.765 "current_admin_qpairs": 0, 00:18:12.765 "current_io_qpairs": 0, 00:18:12.765 "pending_bdev_io": 0, 00:18:12.765 "completed_nvme_io": 51, 00:18:12.765 "transports": [ 00:18:12.765 { 00:18:12.765 "trtype": "RDMA", 00:18:12.765 "pending_data_buffer": 0, 00:18:12.765 "devices": [ 00:18:12.765 { 00:18:12.765 "name": "rocep24s0f0", 00:18:12.765 "polls": 2720833, 00:18:12.765 "idle_polls": 2720517, 00:18:12.765 "completions": 3759, 00:18:12.765 "requests": 3654, 00:18:12.765 "request_latency": 518504204, 00:18:12.765 "pending_free_request": 0, 00:18:12.765 "pending_rdma_read": 0, 00:18:12.765 "pending_rdma_write": 0, 00:18:12.765 "pending_rdma_send": 0, 00:18:12.765 "total_send_wrs": 154, 00:18:12.765 "send_doorbell_updates": 103, 00:18:12.765 "total_recv_wrs": 3654, 00:18:12.765 "recv_doorbell_updates": 130 00:18:12.765 }, 00:18:12.765 { 00:18:12.765 "name": "rocep24s0f1", 00:18:12.765 "polls": 2720833, 00:18:12.765 "idle_polls": 2720833, 00:18:12.765 "completions": 0, 00:18:12.765 "requests": 0, 00:18:12.765 "request_latency": 0, 00:18:12.765 "pending_free_request": 0, 00:18:12.765 "pending_rdma_read": 0, 00:18:12.765 "pending_rdma_write": 0, 00:18:12.765 "pending_rdma_send": 0, 00:18:12.765 "total_send_wrs": 0, 00:18:12.765 "send_doorbell_updates": 0, 00:18:12.765 "total_recv_wrs": 0, 00:18:12.765 "recv_doorbell_updates": 0 00:18:12.765 } 00:18:12.765 ] 00:18:12.765 } 00:18:12.765 ] 00:18:12.765 }, 00:18:12.765 { 00:18:12.765 "name": "nvmf_tgt_poll_group_001", 00:18:12.765 "admin_qpairs": 2, 00:18:12.765 "io_qpairs": 26, 00:18:12.765 "current_admin_qpairs": 0, 00:18:12.765 "current_io_qpairs": 0, 00:18:12.765 "pending_bdev_io": 0, 00:18:12.765 "completed_nvme_io": 110, 00:18:12.765 "transports": [ 00:18:12.765 { 00:18:12.765 "trtype": "RDMA", 00:18:12.765 "pending_data_buffer": 0, 00:18:12.765 "devices": [ 00:18:12.765 { 00:18:12.765 "name": "rocep24s0f0", 00:18:12.765 "polls": 2626990, 00:18:12.765 "idle_polls": 2626597, 00:18:12.765 "completions": 3710, 00:18:12.765 "requests": 3550, 00:18:12.765 "request_latency": 520445826, 00:18:12.765 "pending_free_request": 0, 00:18:12.765 "pending_rdma_read": 0, 00:18:12.765 "pending_rdma_write": 0, 00:18:12.765 "pending_rdma_send": 0, 00:18:12.765 "total_send_wrs": 268, 00:18:12.765 "send_doorbell_updates": 135, 00:18:12.765 "total_recv_wrs": 3550, 00:18:12.765 "recv_doorbell_updates": 161 00:18:12.765 }, 00:18:12.765 { 00:18:12.765 "name": "rocep24s0f1", 
00:18:12.765 "polls": 2626990, 00:18:12.765 "idle_polls": 2626990, 00:18:12.765 "completions": 0, 00:18:12.765 "requests": 0, 00:18:12.765 "request_latency": 0, 00:18:12.765 "pending_free_request": 0, 00:18:12.765 "pending_rdma_read": 0, 00:18:12.765 "pending_rdma_write": 0, 00:18:12.765 "pending_rdma_send": 0, 00:18:12.765 "total_send_wrs": 0, 00:18:12.765 "send_doorbell_updates": 0, 00:18:12.765 "total_recv_wrs": 0, 00:18:12.765 "recv_doorbell_updates": 0 00:18:12.765 } 00:18:12.765 ] 00:18:12.765 } 00:18:12.765 ] 00:18:12.765 }, 00:18:12.765 { 00:18:12.765 "name": "nvmf_tgt_poll_group_002", 00:18:12.765 "admin_qpairs": 1, 00:18:12.765 "io_qpairs": 26, 00:18:12.765 "current_admin_qpairs": 0, 00:18:12.765 "current_io_qpairs": 0, 00:18:12.765 "pending_bdev_io": 0, 00:18:12.765 "completed_nvme_io": 177, 00:18:12.765 "transports": [ 00:18:12.765 { 00:18:12.765 "trtype": "RDMA", 00:18:12.765 "pending_data_buffer": 0, 00:18:12.765 "devices": [ 00:18:12.765 { 00:18:12.765 "name": "rocep24s0f0", 00:18:12.765 "polls": 2638786, 00:18:12.765 "idle_polls": 2638322, 00:18:12.765 "completions": 3798, 00:18:12.765 "requests": 3594, 00:18:12.765 "request_latency": 543023732, 00:18:12.765 "pending_free_request": 0, 00:18:12.765 "pending_rdma_read": 0, 00:18:12.765 "pending_rdma_write": 0, 00:18:12.765 "pending_rdma_send": 0, 00:18:12.765 "total_send_wrs": 367, 00:18:12.765 "send_doorbell_updates": 171, 00:18:12.765 "total_recv_wrs": 3594, 00:18:12.765 "recv_doorbell_updates": 197 00:18:12.765 }, 00:18:12.765 { 00:18:12.765 "name": "rocep24s0f1", 00:18:12.765 "polls": 2638786, 00:18:12.765 "idle_polls": 2638786, 00:18:12.765 "completions": 0, 00:18:12.765 "requests": 0, 00:18:12.765 "request_latency": 0, 00:18:12.766 "pending_free_request": 0, 00:18:12.766 "pending_rdma_read": 0, 00:18:12.766 "pending_rdma_write": 0, 00:18:12.766 "pending_rdma_send": 0, 00:18:12.766 "total_send_wrs": 0, 00:18:12.766 "send_doorbell_updates": 0, 00:18:12.766 "total_recv_wrs": 0, 00:18:12.766 "recv_doorbell_updates": 0 00:18:12.766 } 00:18:12.766 ] 00:18:12.766 } 00:18:12.766 ] 00:18:12.766 }, 00:18:12.766 { 00:18:12.766 "name": "nvmf_tgt_poll_group_003", 00:18:12.766 "admin_qpairs": 2, 00:18:12.766 "io_qpairs": 26, 00:18:12.766 "current_admin_qpairs": 0, 00:18:12.766 "current_io_qpairs": 0, 00:18:12.766 "pending_bdev_io": 0, 00:18:12.766 "completed_nvme_io": 117, 00:18:12.766 "transports": [ 00:18:12.766 { 00:18:12.766 "trtype": "RDMA", 00:18:12.766 "pending_data_buffer": 0, 00:18:12.766 "devices": [ 00:18:12.766 { 00:18:12.766 "name": "rocep24s0f0", 00:18:12.766 "polls": 2061210, 00:18:12.766 "idle_polls": 2060808, 00:18:12.766 "completions": 3724, 00:18:12.766 "requests": 3557, 00:18:12.766 "request_latency": 513093420, 00:18:12.766 "pending_free_request": 0, 00:18:12.766 "pending_rdma_read": 0, 00:18:12.766 "pending_rdma_write": 0, 00:18:12.766 "pending_rdma_send": 0, 00:18:12.766 "total_send_wrs": 282, 00:18:12.766 "send_doorbell_updates": 142, 00:18:12.766 "total_recv_wrs": 3557, 00:18:12.766 "recv_doorbell_updates": 168 00:18:12.766 }, 00:18:12.766 { 00:18:12.766 "name": "rocep24s0f1", 00:18:12.766 "polls": 2061210, 00:18:12.766 "idle_polls": 2061210, 00:18:12.766 "completions": 0, 00:18:12.766 "requests": 0, 00:18:12.766 "request_latency": 0, 00:18:12.766 "pending_free_request": 0, 00:18:12.766 "pending_rdma_read": 0, 00:18:12.766 "pending_rdma_write": 0, 00:18:12.766 "pending_rdma_send": 0, 00:18:12.766 "total_send_wrs": 0, 00:18:12.766 "send_doorbell_updates": 0, 00:18:12.766 "total_recv_wrs": 0, 00:18:12.766 
"recv_doorbell_updates": 0 00:18:12.766 } 00:18:12.766 ] 00:18:12.766 } 00:18:12.766 ] 00:18:12.766 } 00:18:12.766 ] 00:18:12.766 }' 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 14991 > 0 )) 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 2095067182 > 0 )) 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:12.766 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:12.766 rmmod nvme_rdma 00:18:13.026 rmmod nvme_fabrics 00:18:13.026 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:13.026 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:18:13.026 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:18:13.026 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 3236224 ']' 00:18:13.026 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 3236224 00:18:13.026 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 3236224 ']' 00:18:13.026 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 3236224 00:18:13.026 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:18:13.026 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:13.026 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3236224 00:18:13.026 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:13.026 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:13.026 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3236224' 00:18:13.026 killing process with pid 3236224 00:18:13.026 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 3236224 00:18:13.026 01:59:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 3236224 00:18:14.405 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:14.405 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:18:14.405 00:18:14.405 real 0m32.843s 00:18:14.405 user 1m43.889s 00:18:14.405 sys 0m7.087s 00:18:14.405 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:14.405 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.405 ************************************ 00:18:14.405 END TEST nvmf_rpc 00:18:14.405 ************************************ 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:14.665 ************************************ 00:18:14.665 START TEST nvmf_invalid 00:18:14.665 ************************************ 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:18:14.665 * Looking for test storage... 
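The jsum checks earlier in this run sum a jq filter across the nvmf_get_stats JSON and assert that the total is positive (admin qpairs, I/O qpairs, RDMA completions, cumulative request latency). A standalone equivalent of one such check, reusing the same jq filter and awk summation seen in the trace and assuming the same hypothetical rpc.py placement as above:

  # Sum io_qpairs over all poll groups; fail the run if no I/O qpairs were seen.
  stats="$(./scripts/rpc.py nvmf_get_stats)"
  total="$(jq '.poll_groups[].io_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}')"
  (( total > 0 )) || { echo "expected at least one io_qpair, got ${total:-0}" >&2; exit 1; }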
00:18:14.665 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:14.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.665 --rc genhtml_branch_coverage=1 00:18:14.665 --rc genhtml_function_coverage=1 00:18:14.665 --rc genhtml_legend=1 00:18:14.665 --rc geninfo_all_blocks=1 00:18:14.665 --rc geninfo_unexecuted_blocks=1 00:18:14.665 00:18:14.665 ' 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:14.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.665 --rc genhtml_branch_coverage=1 00:18:14.665 --rc genhtml_function_coverage=1 00:18:14.665 --rc genhtml_legend=1 00:18:14.665 --rc geninfo_all_blocks=1 00:18:14.665 --rc geninfo_unexecuted_blocks=1 00:18:14.665 00:18:14.665 ' 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:14.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.665 --rc genhtml_branch_coverage=1 00:18:14.665 --rc genhtml_function_coverage=1 00:18:14.665 --rc genhtml_legend=1 00:18:14.665 --rc geninfo_all_blocks=1 00:18:14.665 --rc geninfo_unexecuted_blocks=1 00:18:14.665 00:18:14.665 ' 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:14.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.665 --rc genhtml_branch_coverage=1 00:18:14.665 --rc genhtml_function_coverage=1 00:18:14.665 --rc genhtml_legend=1 00:18:14.665 --rc geninfo_all_blocks=1 00:18:14.665 --rc geninfo_unexecuted_blocks=1 00:18:14.665 00:18:14.665 ' 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 
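The scripts/common.sh trace just above (lt 1.15 2 via cmp_versions) shows how the harness compares the installed lcov version against a threshold: both version strings are split on '.', '-' and ':' and the fields are compared numerically, left to right, with missing fields treated as 0. A condensed sketch of that comparison idea -- not the literal scripts/common.sh code:

  # ver_lt 1.15 2  -> success (1.15 < 2); equal versions are not "less than".
  ver_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1
  }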
00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.665 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:14.666 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.666 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:14.688 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:14.688 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:14.688 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.688 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:14.688 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.688 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:14.688 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:14.688 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:18:14.688 01:59:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:18:21.259 01:59:40 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:18:21.259 Found 0000:18:00.0 (0x8086 - 0x159b) 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:21.259 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # 
for pci in "${pci_devs[@]}" 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:18:21.260 Found 0000:18:00.1 (0x8086 - 0x159b) 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@403 -- # modinfo irdma 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:18:21.260 Found net devices under 0000:18:00.0: cvl_0_0 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:18:21.260 Found net devices under 0000:18:00.1: cvl_0_1 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.260 01:59:40 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # rdma_device_init 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@528 -- # allocate_nic_ips 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo cvl_0_0 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for 
net_dev in "${net_devs[@]}" 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo cvl_0_1 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:18:21.260 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:18:21.260 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:18:21.260 altname enp24s0f0np0 00:18:21.260 altname ens785f0np0 00:18:21.260 inet 192.168.100.8/24 scope global cvl_0_0 00:18:21.260 valid_lft forever preferred_lft forever 00:18:21.260 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:18:21.260 valid_lft forever preferred_lft forever 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:18:21.260 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:18:21.260 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:18:21.260 altname enp24s0f1np1 00:18:21.260 altname ens785f1np1 00:18:21.260 inet 192.168.100.9/24 scope global cvl_0_1 00:18:21.260 valid_lft forever preferred_lft forever 00:18:21.260 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:18:21.260 valid_lft forever preferred_lft forever 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo cvl_0_0 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo cvl_0_1 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:21.260 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:18:21.261 
01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:18:21.261 192.168.100.9' 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # head -n 1 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:18:21.261 192.168.100.9' 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:18:21.261 192.168.100.9' 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # tail -n +2 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # head -n 1 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=3242648 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 3242648 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 3242648 ']' 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
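At this point nvmfappstart has launched build/bin/nvmf_tgt with -i 0 -e 0xFFFF -m 0xF and waitforlisten blocks until the target answers on /var/tmp/spdk.sock. A rough sketch of that start-and-poll pattern, under assumptions: using spdk_get_version as the liveness probe is my choice for illustration, not the actual waitforlisten() logic in autotest_common.sh:

    # Launch the target in the background, then poll the JSON-RPC socket.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    for ((i = 0; i < 100; i++)); do
        # spdk_get_version only succeeds once the app listens on /var/tmp/spdk.sock
        if ./scripts/rpc.py spdk_get_version &> /dev/null; then
            break
        fi
        # Bail out early if the target process died during startup
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.5
    done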
00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:21.261 01:59:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:21.261 [2024-10-09 01:59:40.554519] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:18:21.261 [2024-10-09 01:59:40.554635] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.261 [2024-10-09 01:59:40.681497] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:21.261 [2024-10-09 01:59:40.874884] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.261 [2024-10-09 01:59:40.874936] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.261 [2024-10-09 01:59:40.874954] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.261 [2024-10-09 01:59:40.874967] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.261 [2024-10-09 01:59:40.874976] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:21.261 [2024-10-09 01:59:40.877176] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.261 [2024-10-09 01:59:40.877245] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.261 [2024-10-09 01:59:40.877305] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.261 [2024-10-09 01:59:40.877311] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:18:21.829 01:59:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:21.829 01:59:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:18:21.830 01:59:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:21.830 01:59:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:21.830 01:59:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:21.830 01:59:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.830 01:59:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:21.830 01:59:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7963 00:18:21.830 [2024-10-09 01:59:41.614362] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:21.830 01:59:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:18:21.830 { 00:18:21.830 "nqn": "nqn.2016-06.io.spdk:cnode7963", 00:18:21.830 "tgt_name": "foobar", 00:18:21.830 "method": "nvmf_create_subsystem", 00:18:21.830 "req_id": 1 00:18:21.830 } 00:18:21.830 Got JSON-RPC error response 00:18:21.830 response: 00:18:21.830 { 00:18:21.830 "code": -32603, 00:18:21.830 "message": "Unable to find target foobar" 00:18:21.830 }' 
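The negative test above points nvmf_create_subsystem at a target name that was never created ("foobar") and captures the resulting JSON-RPC -32603 error; the [[ ... ]] on the next trace line merely pattern-matches the message text. A self-contained sketch of the same check, using the rpc.py path from this workspace (the helper name expect_rpc_error is hypothetical):

    rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py

    # Hypothetical helper: run an RPC that must fail and assert on the error text.
    expect_rpc_error() {
        local pattern=$1; shift
        local out
        if out=$("$rpc" "$@" 2>&1); then
            echo "expected failure, got success: $out" >&2
            return 1
        fi
        [[ $out == *"$pattern"* ]]
    }

    # -t selects the target; "foobar" does not exist, so the RPC returns -32603.
    expect_rpc_error 'Unable to find target' \
        nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7963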
00:18:21.830 01:59:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:18:21.830 { 00:18:21.830 "nqn": "nqn.2016-06.io.spdk:cnode7963", 00:18:21.830 "tgt_name": "foobar", 00:18:21.830 "method": "nvmf_create_subsystem", 00:18:21.830 "req_id": 1 00:18:21.830 } 00:18:21.830 Got JSON-RPC error response 00:18:21.830 response: 00:18:21.830 { 00:18:21.830 "code": -32603, 00:18:21.830 "message": "Unable to find target foobar" 00:18:21.830 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:22.089 01:59:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:22.089 01:59:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18244 00:18:22.089 [2024-10-09 01:59:41.827108] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18244: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:22.089 01:59:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:18:22.089 { 00:18:22.089 "nqn": "nqn.2016-06.io.spdk:cnode18244", 00:18:22.089 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:22.089 "method": "nvmf_create_subsystem", 00:18:22.089 "req_id": 1 00:18:22.089 } 00:18:22.089 Got JSON-RPC error response 00:18:22.089 response: 00:18:22.089 { 00:18:22.089 "code": -32602, 00:18:22.089 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:22.089 }' 00:18:22.089 01:59:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:18:22.089 { 00:18:22.089 "nqn": "nqn.2016-06.io.spdk:cnode18244", 00:18:22.089 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:22.089 "method": "nvmf_create_subsystem", 00:18:22.089 "req_id": 1 00:18:22.089 } 00:18:22.089 Got JSON-RPC error response 00:18:22.089 response: 00:18:22.089 { 00:18:22.089 "code": -32602, 00:18:22.089 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:22.089 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:22.089 01:59:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:22.089 01:59:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23080 00:18:22.349 [2024-10-09 01:59:42.027794] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23080: invalid model number 'SPDK_Controller' 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:18:22.349 { 00:18:22.349 "nqn": "nqn.2016-06.io.spdk:cnode23080", 00:18:22.349 "model_number": "SPDK_Controller\u001f", 00:18:22.349 "method": "nvmf_create_subsystem", 00:18:22.349 "req_id": 1 00:18:22.349 } 00:18:22.349 Got JSON-RPC error response 00:18:22.349 response: 00:18:22.349 { 00:18:22.349 "code": -32602, 00:18:22.349 "message": "Invalid MN SPDK_Controller\u001f" 00:18:22.349 }' 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:18:22.349 { 00:18:22.349 "nqn": "nqn.2016-06.io.spdk:cnode23080", 00:18:22.349 "model_number": "SPDK_Controller\u001f", 00:18:22.349 "method": "nvmf_create_subsystem", 00:18:22.349 "req_id": 1 00:18:22.349 } 00:18:22.349 Got JSON-RPC error response 00:18:22.349 response: 00:18:22.349 { 00:18:22.349 "code": -32602, 00:18:22.349 
"message": "Invalid MN SPDK_Controller\u001f" 00:18:22.349 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
36 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.349 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:18:22.609 01:59:42 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ p == \- ]] 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'pt)4$|w}aQ@^U!Kp])/]Z' 00:18:22.609 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'pt)4$|w}aQ@^U!Kp])/]Z' nqn.2016-06.io.spdk:cnode32265 00:18:22.609 [2024-10-09 01:59:42.401077] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32265: invalid serial number 'pt)4$|w}aQ@^U!Kp])/]Z' 00:18:22.869 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:18:22.869 { 00:18:22.869 "nqn": "nqn.2016-06.io.spdk:cnode32265", 00:18:22.869 "serial_number": "pt)4$|w}aQ@^U!Kp])/]Z", 00:18:22.869 "method": "nvmf_create_subsystem", 00:18:22.869 "req_id": 1 00:18:22.869 } 00:18:22.869 Got JSON-RPC error response 00:18:22.869 response: 00:18:22.869 { 00:18:22.869 "code": -32602, 00:18:22.869 "message": "Invalid SN pt)4$|w}aQ@^U!Kp])/]Z" 00:18:22.869 }' 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:18:22.870 { 00:18:22.870 "nqn": "nqn.2016-06.io.spdk:cnode32265", 00:18:22.870 "serial_number": "pt)4$|w}aQ@^U!Kp])/]Z", 00:18:22.870 "method": "nvmf_create_subsystem", 00:18:22.870 "req_id": 1 00:18:22.870 } 00:18:22.870 Got JSON-RPC error response 00:18:22.870 response: 00:18:22.870 { 00:18:22.870 "code": -32602, 00:18:22.870 "message": "Invalid SN pt)4$|w}aQ@^U!Kp])/]Z" 00:18:22.870 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 
41 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # string+=+ 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:22.870 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:18:22.870 01:59:42 
[~135 xtrace lines elided: target/invalid.sh@24-25 repeats the same three steps for each remaining position of the random string — `printf %x` on a code point, `echo -e` to render it, `string+=` to append it — adding, in order, @ z u p $ \x7f t 0 o y G < 1 , , $ n " ~ = I Y 7 y I h -] 00:18:23.131 01:59:42 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:18:23.131 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:18:23.131 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:18:23.131 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:23.131 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:23.131 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:18:23.131 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:23.131 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:18:23.131 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:23.131 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:23.131 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:18:23.131 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:18:23.131 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:18:23.131 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:23.131 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:23.131 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ Q == \- ]] 00:18:23.131 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'QRSk+L.$@CH@zup$t0oyG<1,,$n"~=IY7yIh-k3[' 00:18:23.131 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'QRSk+L.$@CH@zup$t0oyG<1,,$n"~=IY7yIh-k3[' nqn.2016-06.io.spdk:cnode15727 00:18:23.131 [2024-10-09 01:59:42.934934] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15727: invalid model number 'QRSk+L.$@CH@zup$t0oyG<1,,$n"~=IY7yIh-k3[' 00:18:23.390 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:18:23.390 { 00:18:23.390 "nqn": "nqn.2016-06.io.spdk:cnode15727", 00:18:23.390 "model_number": "QRSk+L.$@CH@zup$\u007ft0oyG<1,,$n\"~=IY7yIh-k3[", 00:18:23.390 "method": "nvmf_create_subsystem", 00:18:23.390 "req_id": 1 00:18:23.390 } 00:18:23.390 Got JSON-RPC error response 00:18:23.390 response: 00:18:23.390 { 00:18:23.390 "code": -32602, 00:18:23.390 "message": "Invalid MN QRSk+L.$@CH@zup$\u007ft0oyG<1,,$n\"~=IY7yIh-k3[" 00:18:23.390 }' 00:18:23.390 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:18:23.390 { 00:18:23.390 "nqn": "nqn.2016-06.io.spdk:cnode15727", 00:18:23.390 "model_number": "QRSk+L.$@CH@zup$\u007ft0oyG<1,,$n\"~=IY7yIh-k3[", 00:18:23.390 "method": "nvmf_create_subsystem", 00:18:23.390 "req_id": 1 00:18:23.390 } 00:18:23.390 Got JSON-RPC error response 00:18:23.390 response: 00:18:23.390 { 00:18:23.390 "code": -32602, 00:18:23.390 "message": "Invalid MN QRSk+L.$@CH@zup$\u007ft0oyG<1,,$n\"~=IY7yIh-k3[" 00:18:23.390 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:23.390 01:59:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport --trtype rdma 00:18:23.390 [2024-10-09 01:59:43.152600] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x6120000292c0/0x617000007c40) succeed. 00:18:23.390 [2024-10-09 01:59:43.162326] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029440/0x617000007fc0) succeed. 00:18:23.390 [2024-10-09 01:59:43.162369] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:18:23.390 [2024-10-09 01:59:43.165451] iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 257/767 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:18:23.390 [2024-10-09 01:59:43.165488] iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:18:23.390 [2024-10-09 01:59:43.166051] transport.c: 636:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:18:23.390 [2024-10-09 01:59:43.167427] iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 257/767 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:18:23.390 [2024-10-09 01:59:43.167461] iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:18:23.390 [2024-10-09 01:59:43.168020] transport.c: 636:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:18:23.390 [2024-10-09 01:59:43.169267] iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 257/767 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:18:23.390 [2024-10-09 01:59:43.169300] iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:18:23.390 [2024-10-09 01:59:43.169853] transport.c: 636:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 
00:18:23.391 01:59:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:18:23.650 01:59:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:18:23.650 01:59:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:18:23.650 192.168.100.9' 00:18:23.650 01:59:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:18:23.650 01:59:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:18:23.650 01:59:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:18:23.909 [2024-10-09 01:59:43.584204] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:23.909 01:59:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:18:23.909 { 00:18:23.909 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:23.909 "listen_address": { 00:18:23.909 "trtype": "rdma", 00:18:23.909 "traddr": "192.168.100.8", 00:18:23.909 "trsvcid": "4421" 00:18:23.909 }, 00:18:23.909 "method": "nvmf_subsystem_remove_listener", 00:18:23.909 "req_id": 1 00:18:23.909 } 00:18:23.909 Got JSON-RPC error response 00:18:23.909 response: 00:18:23.909 { 00:18:23.909 "code": -32602, 00:18:23.909 "message": "Invalid parameters" 00:18:23.909 }' 00:18:23.909 01:59:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:18:23.909 { 00:18:23.909 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:23.909 "listen_address": { 00:18:23.909 "trtype": "rdma", 00:18:23.909 "traddr": "192.168.100.8", 00:18:23.909 "trsvcid": "4421" 00:18:23.909 }, 00:18:23.909 "method": "nvmf_subsystem_remove_listener", 00:18:23.909 "req_id": 1 00:18:23.909 } 00:18:23.909 Got JSON-RPC error response 00:18:23.909 response: 00:18:23.909 { 00:18:23.909 "code": -32602, 00:18:23.909 "message": "Invalid parameters" 00:18:23.909 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:23.909 01:59:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5578 -i 0 00:18:24.169 [2024-10-09 01:59:43.796968] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5578: invalid cntlid range [0-65519] 00:18:24.169 01:59:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:18:24.169 { 00:18:24.169 "nqn": "nqn.2016-06.io.spdk:cnode5578", 00:18:24.169 "min_cntlid": 0, 00:18:24.169 "method": "nvmf_create_subsystem", 00:18:24.169 "req_id": 1 00:18:24.169 } 00:18:24.169 Got JSON-RPC error response 00:18:24.169 response: 00:18:24.169 { 00:18:24.169 "code": -32602, 00:18:24.169 "message": "Invalid cntlid range [0-65519]" 00:18:24.169 }' 00:18:24.169 01:59:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:18:24.169 { 00:18:24.169 "nqn": "nqn.2016-06.io.spdk:cnode5578", 00:18:24.169 "min_cntlid": 0, 00:18:24.169 "method": "nvmf_create_subsystem", 00:18:24.169 "req_id": 1 00:18:24.169 } 00:18:24.169 Got JSON-RPC error response 00:18:24.169 response: 00:18:24.169 { 00:18:24.169 "code": -32602, 00:18:24.169 
"message": "Invalid cntlid range [0-65519]" 00:18:24.169 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:24.169 01:59:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3838 -i 65520 00:18:24.428 [2024-10-09 01:59:44.021786] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3838: invalid cntlid range [65520-65519] 00:18:24.428 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:18:24.428 { 00:18:24.428 "nqn": "nqn.2016-06.io.spdk:cnode3838", 00:18:24.428 "min_cntlid": 65520, 00:18:24.428 "method": "nvmf_create_subsystem", 00:18:24.428 "req_id": 1 00:18:24.428 } 00:18:24.428 Got JSON-RPC error response 00:18:24.428 response: 00:18:24.428 { 00:18:24.428 "code": -32602, 00:18:24.428 "message": "Invalid cntlid range [65520-65519]" 00:18:24.428 }' 00:18:24.428 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:18:24.428 { 00:18:24.428 "nqn": "nqn.2016-06.io.spdk:cnode3838", 00:18:24.428 "min_cntlid": 65520, 00:18:24.428 "method": "nvmf_create_subsystem", 00:18:24.428 "req_id": 1 00:18:24.428 } 00:18:24.428 Got JSON-RPC error response 00:18:24.428 response: 00:18:24.428 { 00:18:24.428 "code": -32602, 00:18:24.428 "message": "Invalid cntlid range [65520-65519]" 00:18:24.428 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:24.428 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10768 -I 0 00:18:24.428 [2024-10-09 01:59:44.234533] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10768: invalid cntlid range [1-0] 00:18:24.687 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:18:24.687 { 00:18:24.687 "nqn": "nqn.2016-06.io.spdk:cnode10768", 00:18:24.687 "max_cntlid": 0, 00:18:24.687 "method": "nvmf_create_subsystem", 00:18:24.687 "req_id": 1 00:18:24.687 } 00:18:24.687 Got JSON-RPC error response 00:18:24.687 response: 00:18:24.687 { 00:18:24.687 "code": -32602, 00:18:24.687 "message": "Invalid cntlid range [1-0]" 00:18:24.687 }' 00:18:24.687 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:18:24.687 { 00:18:24.687 "nqn": "nqn.2016-06.io.spdk:cnode10768", 00:18:24.687 "max_cntlid": 0, 00:18:24.687 "method": "nvmf_create_subsystem", 00:18:24.687 "req_id": 1 00:18:24.687 } 00:18:24.687 Got JSON-RPC error response 00:18:24.687 response: 00:18:24.687 { 00:18:24.687 "code": -32602, 00:18:24.687 "message": "Invalid cntlid range [1-0]" 00:18:24.687 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:24.687 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7887 -I 65520 00:18:24.687 [2024-10-09 01:59:44.443290] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7887: invalid cntlid range [1-65520] 00:18:24.687 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:18:24.687 { 00:18:24.687 "nqn": "nqn.2016-06.io.spdk:cnode7887", 00:18:24.687 "max_cntlid": 65520, 00:18:24.687 "method": "nvmf_create_subsystem", 00:18:24.687 "req_id": 1 00:18:24.687 } 
00:18:24.687 Got JSON-RPC error response 00:18:24.687 response: 00:18:24.687 { 00:18:24.687 "code": -32602, 00:18:24.687 "message": "Invalid cntlid range [1-65520]" 00:18:24.687 }' 00:18:24.687 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:18:24.687 { 00:18:24.687 "nqn": "nqn.2016-06.io.spdk:cnode7887", 00:18:24.687 "max_cntlid": 65520, 00:18:24.687 "method": "nvmf_create_subsystem", 00:18:24.687 "req_id": 1 00:18:24.687 } 00:18:24.687 Got JSON-RPC error response 00:18:24.687 response: 00:18:24.687 { 00:18:24.687 "code": -32602, 00:18:24.687 "message": "Invalid cntlid range [1-65520]" 00:18:24.687 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:24.687 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5666 -i 6 -I 5 00:18:24.947 [2024-10-09 01:59:44.648103] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5666: invalid cntlid range [6-5] 00:18:24.947 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:18:24.947 { 00:18:24.947 "nqn": "nqn.2016-06.io.spdk:cnode5666", 00:18:24.947 "min_cntlid": 6, 00:18:24.947 "max_cntlid": 5, 00:18:24.947 "method": "nvmf_create_subsystem", 00:18:24.947 "req_id": 1 00:18:24.947 } 00:18:24.947 Got JSON-RPC error response 00:18:24.947 response: 00:18:24.947 { 00:18:24.947 "code": -32602, 00:18:24.947 "message": "Invalid cntlid range [6-5]" 00:18:24.947 }' 00:18:24.947 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:18:24.947 { 00:18:24.947 "nqn": "nqn.2016-06.io.spdk:cnode5666", 00:18:24.947 "min_cntlid": 6, 00:18:24.947 "max_cntlid": 5, 00:18:24.947 "method": "nvmf_create_subsystem", 00:18:24.947 "req_id": 1 00:18:24.947 } 00:18:24.947 Got JSON-RPC error response 00:18:24.947 response: 00:18:24.947 { 00:18:24.947 "code": -32602, 00:18:24.947 "message": "Invalid cntlid range [6-5]" 00:18:24.947 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:24.947 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:18:25.207 { 00:18:25.207 "name": "foobar", 00:18:25.207 "method": "nvmf_delete_target", 00:18:25.207 "req_id": 1 00:18:25.207 } 00:18:25.207 Got JSON-RPC error response 00:18:25.207 response: 00:18:25.207 { 00:18:25.207 "code": -32602, 00:18:25.207 "message": "The specified target doesn'\''t exist, cannot delete it." 00:18:25.207 }' 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:18:25.207 { 00:18:25.207 "name": "foobar", 00:18:25.207 "method": "nvmf_delete_target", 00:18:25.207 "req_id": 1 00:18:25.207 } 00:18:25.207 Got JSON-RPC error response 00:18:25.207 response: 00:18:25.207 { 00:18:25.207 "code": -32602, 00:18:25.207 "message": "The specified target doesn't exist, cannot delete it." 
00:18:25.207 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:25.207 rmmod nvme_rdma 00:18:25.207 rmmod nvme_fabrics 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 3242648 ']' 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 3242648 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 3242648 ']' 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 3242648 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3242648 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3242648' 00:18:25.207 killing process with pid 3242648 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 3242648 00:18:25.207 01:59:44 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 3242648 00:18:26.587 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:26.587 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:18:26.587 00:18:26.587 real 0m11.949s 00:18:26.587 user 0m24.579s 00:18:26.587 sys 0m6.013s 00:18:26.587 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:26.587 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:26.587 ************************************ 00:18:26.587 
END TEST nvmf_invalid 00:18:26.587 ************************************ 00:18:26.587 01:59:46 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:18:26.587 01:59:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:26.587 01:59:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:26.587 01:59:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:26.587 ************************************ 00:18:26.587 START TEST nvmf_connect_stress 00:18:26.587 ************************************ 00:18:26.587 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:18:26.587 * Looking for test storage... 00:18:26.587 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:18:26.587 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:26.587 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:18:26.587 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:26.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.847 --rc genhtml_branch_coverage=1 00:18:26.847 --rc genhtml_function_coverage=1 00:18:26.847 --rc genhtml_legend=1 00:18:26.847 --rc geninfo_all_blocks=1 00:18:26.847 --rc geninfo_unexecuted_blocks=1 00:18:26.847 00:18:26.847 ' 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:26.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.847 --rc genhtml_branch_coverage=1 00:18:26.847 --rc genhtml_function_coverage=1 00:18:26.847 --rc genhtml_legend=1 00:18:26.847 --rc geninfo_all_blocks=1 00:18:26.847 --rc geninfo_unexecuted_blocks=1 00:18:26.847 00:18:26.847 ' 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:26.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.847 --rc genhtml_branch_coverage=1 00:18:26.847 --rc genhtml_function_coverage=1 00:18:26.847 --rc genhtml_legend=1 00:18:26.847 --rc geninfo_all_blocks=1 00:18:26.847 --rc geninfo_unexecuted_blocks=1 00:18:26.847 00:18:26.847 ' 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:26.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.847 --rc genhtml_branch_coverage=1 00:18:26.847 --rc genhtml_function_coverage=1 00:18:26.847 --rc genhtml_legend=1 00:18:26.847 --rc geninfo_all_blocks=1 00:18:26.847 --rc geninfo_unexecuted_blocks=1 00:18:26.847 00:18:26.847 ' 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.847 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:26.848 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:18:26.848 01:59:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # 
local -ga x722 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:18:33.419 Found 0000:18:00.0 (0x8086 - 0x159b) 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.419 
01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:18:33.419 Found 0000:18:00.1 (0x8086 - 0x159b) 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:33.419 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@403 -- # modinfo irdma 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:18:33.420 Found net devices under 0000:18:00.0: cvl_0_0 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:18:33.420 Found net devices under 0000:18:00.1: cvl_0_1 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # rdma_device_init 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@528 -- # allocate_nic_ips 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for 
net_dev in "${net_devs[@]}" 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo cvl_0_0 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo cvl_0_1 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:18:33.420 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:18:33.420 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:18:33.420 altname enp24s0f0np0 00:18:33.420 altname ens785f0np0 00:18:33.420 inet 192.168.100.8/24 scope global cvl_0_0 00:18:33.420 valid_lft forever preferred_lft forever 00:18:33.420 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:18:33.420 valid_lft forever preferred_lft forever 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:33.420 01:59:52 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:18:33.420 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:18:33.420 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:18:33.420 altname enp24s0f1np1 00:18:33.420 altname ens785f1np1 00:18:33.420 inet 192.168.100.9/24 scope global cvl_0_1 00:18:33.420 valid_lft forever preferred_lft forever 00:18:33.420 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:18:33.420 valid_lft forever preferred_lft forever 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo cvl_0_0 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:18:33.420 01:59:52 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo cvl_0_1 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:18:33.420 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:18:33.421 192.168.100.9' 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:18:33.421 192.168.100.9' 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # head -n 1 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:18:33.421 192.168.100.9' 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # tail -n +2 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # head -n 1 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:33.421 01:59:52 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=3246456 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 3246456 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 3246456 ']' 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:33.421 01:59:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.421 [2024-10-09 01:59:52.845794] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:18:33.421 [2024-10-09 01:59:52.845919] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.421 [2024-10-09 01:59:52.976858] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:33.421 [2024-10-09 01:59:53.170327] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.421 [2024-10-09 01:59:53.170389] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.421 [2024-10-09 01:59:53.170402] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.421 [2024-10-09 01:59:53.170416] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.421 [2024-10-09 01:59:53.170425] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
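A note on the flags visible above: nvmf_tgt was launched with -m 0xE, a hexadecimal CPU core mask in which each set bit pins one SPDK reactor thread to that core (0xE is binary 1110, i.e. cores 1, 2 and 3, which is why exactly three reactors report in on cores 1-3 just below and app.c logs "Total cores available: 3"), while -e 0xFFFF enables all tracepoint groups for the spdk_trace snapshot mentioned in the notices. A minimal standalone sketch, independent of the test scripts, that decodes such a mask:

# decode an SPDK-style core mask into the cores reactors will run on
mask=0xE
for core in {0..31}; do
    if (( (mask >> core) & 1 )); then
        echo "reactor expected on core $core"
    fi
done

Run against 0xE this prints cores 1, 2 and 3; core 0 is left free for housekeeping.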
00:18:33.421 [2024-10-09 01:59:53.172060] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.421 [2024-10-09 01:59:53.172113] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.421 [2024-10-09 01:59:53.172122] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.990 [2024-10-09 01:59:53.725241] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x612000028fc0/0x617000007c40) succeed. 00:18:33.990 [2024-10-09 01:59:53.734727] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029140/0x617000007fc0) succeed. 00:18:33.990 [2024-10-09 01:59:53.734760] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.990 [2024-10-09 01:59:53.755114] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.990 NULL1 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3246649 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:33.990 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:33.991 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:33.991 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:33.991 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:33.991 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:33.991 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 
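The rpc_cmd invocations above drive the target over /var/tmp/spdk.sock through SPDK's scripts/rpc.py. Pulled out of the harness, the same bring-up can be sketched as the following sequence (transport, subsystem, listener and null-bdev values are copied from the log; the final nvmf_subsystem_add_ns call is an assumption, since the namespace attach itself is not traced verbatim in this excerpt):

# assumes a running nvmf_tgt on the default RPC socket /var/tmp/spdk.sock
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512-byte blocks
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # assumed attach step

After this an initiator can reach nqn.2016-06.io.spdk:cnode1 at 192.168.100.8:4420 over RDMA, which is exactly the target string handed to the connect_stress binary above; the twenty "for i / cat" records that continue below are the harness appending entries to rpc.txt.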
00:18:33.991 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:33.991 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:33.991 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:33.991 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:33.991 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:33.991 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 
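The repeated 'for i in $(seq 1 20)' / 'cat' records around this point build the rpc.txt batch file that the monitor loop later replays with rpc_cmd while connect_stress runs; xtrace records only the cat invocations, not the here-doc payload being appended. A plausible reconstruction of the pattern, with a hypothetical add/remove namespace pair as the payload (the real file contents are not visible in this log):

rpcs=rpc.txt
rm -f "$rpcs"
for i in $(seq 1 20); do
  # hypothetical payload: the trace above shows only the 'cat', not this text
  cat >> "$rpcs" <<EOF
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $i
EOF
done

Replaying a batch like this adds and removes namespaces under the initiator's feet while it is busy connecting and disconnecting, which is the point of the stress test.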
00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.250 01:59:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:34.515 01:59:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.515 01:59:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:34.515 01:59:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:34.515 01:59:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.515 01:59:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:35.084 01:59:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.085 01:59:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:35.085 01:59:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:35.085 01:59:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.085 01:59:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:35.344 01:59:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.344 01:59:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:35.344 01:59:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:35.344 01:59:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.344 01:59:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:35.604 01:59:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.864 01:59:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:35.864 01:59:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:35.864 01:59:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.864 01:59:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.124 01:59:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.124 01:59:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3246649 00:18:36.124 01:59:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:36.124 01:59:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.124 01:59:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.384 01:59:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.384 01:59:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:36.384 01:59:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:36.384 01:59:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.384 01:59:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.953 01:59:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.953 01:59:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:36.953 01:59:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:36.953 01:59:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.953 01:59:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.213 01:59:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.213 01:59:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:37.213 01:59:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:37.213 01:59:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.213 01:59:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.783 01:59:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.783 01:59:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:37.783 01:59:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:37.783 01:59:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.783 01:59:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:38.042 01:59:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.042 01:59:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:38.042 01:59:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:38.042 01:59:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.042 01:59:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:38.302 01:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.302 01:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 3246649 00:18:38.302 01:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:38.302 01:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.302 01:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:38.871 01:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.871 01:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:38.871 01:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:38.871 01:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.871 01:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.131 01:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.131 01:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:39.131 01:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:39.131 01:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.131 01:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.700 01:59:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.700 01:59:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:39.700 01:59:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:39.700 01:59:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.700 01:59:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.960 01:59:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.960 01:59:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:39.960 01:59:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:39.960 01:59:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.960 01:59:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:40.529 02:00:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.529 02:00:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:40.529 02:00:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:40.529 02:00:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.529 02:00:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:40.792 02:00:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.792 02:00:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 3246649 00:18:40.792 02:00:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:40.792 02:00:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.792 02:00:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:41.051 02:00:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.051 02:00:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:41.051 02:00:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:41.051 02:00:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.051 02:00:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:41.619 02:00:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.619 02:00:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:41.619 02:00:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:41.619 02:00:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.619 02:00:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:41.879 02:00:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.879 02:00:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:41.879 02:00:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:41.879 02:00:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.879 02:00:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.139 02:00:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.139 02:00:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:42.139 02:00:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:42.139 02:00:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.399 02:00:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.659 02:00:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.659 02:00:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:42.659 02:00:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:42.659 02:00:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.659 02:00:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:43.229 02:00:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.229 02:00:02 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:43.229 02:00:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:43.229 02:00:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.229 02:00:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:43.488 02:00:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.488 02:00:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:43.488 02:00:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:43.488 02:00:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.488 02:00:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:43.747 02:00:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.747 02:00:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:43.747 02:00:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:43.747 02:00:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.747 02:00:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:44.316 02:00:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.316 02:00:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:44.316 02:00:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:44.316 02:00:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.316 02:00:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:44.316 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:44.575 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.575 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3246649 00:18:44.575 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3246649) - No such process 00:18:44.575 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3246649 00:18:44.575 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:44.575 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:44.576 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:44.576 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:44.576 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:44.576 02:00:04 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:44.576 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:44.576 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:44.576 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:44.576 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:44.576 rmmod nvme_rdma 00:18:44.576 rmmod nvme_fabrics 00:18:44.576 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:44.576 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:44.576 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:44.576 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 3246456 ']' 00:18:44.576 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 3246456 00:18:44.576 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 3246456 ']' 00:18:44.576 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 3246456 00:18:44.576 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:18:44.576 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:44.576 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3246456 00:18:44.576 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:44.576 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:44.576 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3246456' 00:18:44.576 killing process with pid 3246456 00:18:44.835 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 3246456 00:18:44.835 02:00:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 3246456 00:18:46.217 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:46.217 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:18:46.217 00:18:46.217 real 0m19.426s 00:18:46.217 user 0m43.060s 00:18:46.217 sys 0m10.900s 00:18:46.217 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:46.217 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:46.217 ************************************ 00:18:46.217 END TEST nvmf_connect_stress 00:18:46.217 ************************************ 00:18:46.217 02:00:05 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:18:46.217 02:00:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
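The long run of 'kill -0 3246649' records in the connect_stress section above is the harness's liveness probe: with signal 0, kill sends nothing and only reports through its exit status whether the PID still exists, so each successful check is followed by another rpc_cmd replay of rpc.txt, and the loop ends once kill prints '(3246649) - No such process' and the script falls through to 'wait 3246649' and cleanup. A minimal standalone sketch of that poll-until-exit pattern:

# stand-in workload; connect_stress plays this role in the log above
sleep 5 &
pid=$!

while kill -0 "$pid" 2> /dev/null; do
    echo "PID $pid still running; replaying RPC batch..."
    sleep 1
done

wait "$pid"   # reap the child, as the harness's 'wait $PERF_PID' does
echo "workload finished; tearing down"

Because kill -0 never delivers a signal, it can poll the target process without perturbing it, which makes it a safe primitive for this kind of watcher loop.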
00:18:46.217 02:00:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:46.217 02:00:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:46.217 ************************************ 00:18:46.218 START TEST nvmf_fused_ordering 00:18:46.218 ************************************ 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:18:46.218 * Looking for test storage... 00:18:46.218 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:46.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.218 --rc genhtml_branch_coverage=1 00:18:46.218 --rc genhtml_function_coverage=1 00:18:46.218 --rc genhtml_legend=1 00:18:46.218 --rc geninfo_all_blocks=1 00:18:46.218 --rc geninfo_unexecuted_blocks=1 00:18:46.218 00:18:46.218 ' 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:46.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.218 --rc genhtml_branch_coverage=1 00:18:46.218 --rc genhtml_function_coverage=1 00:18:46.218 --rc genhtml_legend=1 00:18:46.218 --rc geninfo_all_blocks=1 00:18:46.218 --rc geninfo_unexecuted_blocks=1 00:18:46.218 00:18:46.218 ' 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:46.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.218 --rc genhtml_branch_coverage=1 00:18:46.218 --rc genhtml_function_coverage=1 00:18:46.218 --rc genhtml_legend=1 00:18:46.218 --rc geninfo_all_blocks=1 00:18:46.218 --rc geninfo_unexecuted_blocks=1 00:18:46.218 00:18:46.218 ' 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:46.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.218 --rc genhtml_branch_coverage=1 00:18:46.218 --rc genhtml_function_coverage=1 00:18:46.218 --rc genhtml_legend=1 00:18:46.218 --rc geninfo_all_blocks=1 00:18:46.218 --rc geninfo_unexecuted_blocks=1 00:18:46.218 00:18:46.218 ' 00:18:46.218 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.219 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:46.219 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.219 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.219 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.219 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.219 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.219 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.219 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.219 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.219 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.219 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.219 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:18:46.219 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:18:46.219 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.219 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.219 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.219 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.219 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:18:46.219 02:00:05 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:46.219 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:46.219 02:00:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:52.796 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:52.796 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:52.796 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:52.796 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:52.796 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:52.796 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:52.796 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:52.796 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:52.796 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:52.796 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:52.796 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:52.796 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:52.796 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
local -ga x722 00:18:52.796 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:52.796 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:52.796 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.796 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.796 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.796 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.796 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.796 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:18:52.797 Found 0000:18:00.0 (0x8086 - 0x159b) 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.797 
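
Editor's note: the "[: : integer expression expected" error recorded at the top of this test comes from common.sh line 33 evaluating '[' '' -eq 1 ']' — an empty expansion reaches the numeric -eq operator, which bash rejects. A minimal sketch of the usual guard follows; the variable name is hypothetical, since the trace shows only the empty expansion, not which variable produced it:

# The failing shape, as recorded in the trace above:
#   [ "$SOME_FLAG" -eq 1 ]    # "[: : integer expression expected" when SOME_FLAG is empty
# Defaulting the expansion keeps the operand numeric in every case:
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag is set"
fi
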
02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:18:52.797 Found 0000:18:00.1 (0x8086 - 0x159b) 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@403 -- # modinfo irdma 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:18:52.797 Found net devices under 0000:18:00.0: cvl_0_0 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:18:52.797 Found net devices under 0000:18:00.1: cvl_0_1 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # rdma_device_init 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@528 -- # allocate_nic_ips 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for 
net_dev in "${net_devs[@]}" 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo cvl_0_0 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo cvl_0_1 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:18:52.797 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:18:52.797 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:18:52.797 altname enp24s0f0np0 00:18:52.797 altname ens785f0np0 00:18:52.797 inet 192.168.100.8/24 scope global cvl_0_0 00:18:52.797 valid_lft forever preferred_lft forever 00:18:52.797 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:18:52.797 valid_lft forever preferred_lft forever 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:52.797 02:00:11 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:52.797 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:18:52.798 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:18:52.798 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:18:52.798 altname enp24s0f1np1 00:18:52.798 altname ens785f1np1 00:18:52.798 inet 192.168.100.9/24 scope global cvl_0_1 00:18:52.798 valid_lft forever preferred_lft forever 00:18:52.798 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:18:52.798 valid_lft forever preferred_lft forever 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo cvl_0_0 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:18:52.798 02:00:11 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo cvl_0_1 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:18:52.798 192.168.100.9' 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:18:52.798 192.168.100.9' 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # head -n 1 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # head -n 1 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:18:52.798 192.168.100.9' 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # tail -n +2 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:52.798 02:00:11 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=3251508 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 3251508 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 3251508 ']' 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:52.798 02:00:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:52.798 [2024-10-09 02:00:12.020737] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:18:52.798 [2024-10-09 02:00:12.020841] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.798 [2024-10-09 02:00:12.147507] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.798 [2024-10-09 02:00:12.333327] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.798 [2024-10-09 02:00:12.333383] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.798 [2024-10-09 02:00:12.333395] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.798 [2024-10-09 02:00:12.333408] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.798 [2024-10-09 02:00:12.333418] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
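
Editor's note: the get_ip_address trace above reduces to a three-stage pipeline, and the RDMA_IP_LIST handling to a head/tail split. A standalone sketch of the same steps, using the interface names and addresses from this run:

# One IPv4 address per interface: field 4 of "ip -o -4 addr show" is
# "addr/prefix", and cut strips the prefix length.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST="$(get_ip_address cvl_0_0)
$(get_ip_address cvl_0_1)"

NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9
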
00:18:52.798 [2024-10-09 02:00:12.334657] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.058 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:53.058 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:18:53.058 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:53.058 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:53.058 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:53.058 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.058 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:53.058 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.058 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:53.058 [2024-10-09 02:00:12.872625] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x6120000289c0/0x617000007c40) succeed. 00:18:53.318 [2024-10-09 02:00:12.882463] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000028b40/0x617000007fc0) succeed. 00:18:53.318 [2024-10-09 02:00:12.882496] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:53.318 [2024-10-09 02:00:12.896310] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:53.318 NULL1 
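
Editor's note: the rpc_cmd calls traced around this point set up the whole target; the last two RPCs of the sequence (bdev_wait_for_examine and nvmf_subsystem_add_ns) appear in the trace just below. A sketch of the same sequence replayed by hand with SPDK's rpc.py — the script path is assumed relative to an SPDK checkout, while the RPC names and arguments are taken verbatim from the trace:

RPC=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener "$NQN" -t rdma -a 192.168.100.8 -s 4420
$RPC bdev_null_create NULL1 1000 512    # 1000 MiB null bdev, 512-byte blocks
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns "$NQN" NULL1 # reported below as "Namespace ID: 1 size: 1GB"
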
00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.318 02:00:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:53.318 [2024-10-09 02:00:12.962466] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:18:53.318 [2024-10-09 02:00:12.962543] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3251695 ] 00:18:53.578 Attached to nqn.2016-06.io.spdk:cnode1 00:18:53.578 Namespace ID: 1 size: 1GB 00:18:53.578 fused_ordering(0) 00:18:53.578 fused_ordering(1) 00:18:53.578 fused_ordering(2) 00:18:53.578 fused_ordering(3) 00:18:53.578 fused_ordering(4) 00:18:53.578 fused_ordering(5) 00:18:53.578 fused_ordering(6) 00:18:53.578 fused_ordering(7) 00:18:53.578 fused_ordering(8) 00:18:53.578 fused_ordering(9) 00:18:53.578 fused_ordering(10) 00:18:53.578 fused_ordering(11) 00:18:53.578 fused_ordering(12) 00:18:53.578 fused_ordering(13) 00:18:53.578 fused_ordering(14) 00:18:53.578 fused_ordering(15) 00:18:53.578 fused_ordering(16) 00:18:53.578 fused_ordering(17) 00:18:53.578 fused_ordering(18) 00:18:53.578 fused_ordering(19) 00:18:53.578 fused_ordering(20) 00:18:53.578 fused_ordering(21) 00:18:53.578 fused_ordering(22) 00:18:53.578 fused_ordering(23) 00:18:53.578 fused_ordering(24) 00:18:53.578 fused_ordering(25) 00:18:53.578 fused_ordering(26) 00:18:53.578 fused_ordering(27) 00:18:53.578 fused_ordering(28) 00:18:53.578 fused_ordering(29) 00:18:53.578 fused_ordering(30) 00:18:53.578 fused_ordering(31) 00:18:53.578 fused_ordering(32) 00:18:53.578 fused_ordering(33) 00:18:53.578 fused_ordering(34) 00:18:53.578 fused_ordering(35) 00:18:53.578 fused_ordering(36) 00:18:53.578 fused_ordering(37) 00:18:53.578 fused_ordering(38) 00:18:53.578 fused_ordering(39) 00:18:53.578 fused_ordering(40) 00:18:53.578 fused_ordering(41) 00:18:53.578 fused_ordering(42) 00:18:53.578 fused_ordering(43) 00:18:53.578 fused_ordering(44) 00:18:53.578 fused_ordering(45) 00:18:53.578 fused_ordering(46) 00:18:53.578 fused_ordering(47) 00:18:53.578 fused_ordering(48) 00:18:53.578 fused_ordering(49) 00:18:53.578 
fused_ordering(50) 00:18:53.578 fused_ordering(51) 00:18:53.578 fused_ordering(52) 00:18:53.578 fused_ordering(53) 00:18:53.578 fused_ordering(54) 00:18:53.578 fused_ordering(55) 00:18:53.578 fused_ordering(56) 00:18:53.578 fused_ordering(57) 00:18:53.578 fused_ordering(58) 00:18:53.578 fused_ordering(59) 00:18:53.578 fused_ordering(60) 00:18:53.578 fused_ordering(61) 00:18:53.578 fused_ordering(62) 00:18:53.578 fused_ordering(63) 00:18:53.578 fused_ordering(64) 00:18:53.578 fused_ordering(65) 00:18:53.578 fused_ordering(66) 00:18:53.578 fused_ordering(67) 00:18:53.578 fused_ordering(68) 00:18:53.578 fused_ordering(69) 00:18:53.578 fused_ordering(70) 00:18:53.578 fused_ordering(71) 00:18:53.578 fused_ordering(72) 00:18:53.578 fused_ordering(73) 00:18:53.578 fused_ordering(74) 00:18:53.578 fused_ordering(75) 00:18:53.578 fused_ordering(76) 00:18:53.578 fused_ordering(77) 00:18:53.578 fused_ordering(78) 00:18:53.578 fused_ordering(79) 00:18:53.578 fused_ordering(80) 00:18:53.578 fused_ordering(81) 00:18:53.578 fused_ordering(82) 00:18:53.578 fused_ordering(83) 00:18:53.578 fused_ordering(84) 00:18:53.578 fused_ordering(85) 00:18:53.578 fused_ordering(86) 00:18:53.578 fused_ordering(87) 00:18:53.578 fused_ordering(88) 00:18:53.578 fused_ordering(89) 00:18:53.578 fused_ordering(90) 00:18:53.578 fused_ordering(91) 00:18:53.578 fused_ordering(92) 00:18:53.578 fused_ordering(93) 00:18:53.578 fused_ordering(94) 00:18:53.578 fused_ordering(95) 00:18:53.578 fused_ordering(96) 00:18:53.578 fused_ordering(97) 00:18:53.578 fused_ordering(98) 00:18:53.578 fused_ordering(99) 00:18:53.578 fused_ordering(100) 00:18:53.578 fused_ordering(101) 00:18:53.578 fused_ordering(102) 00:18:53.578 fused_ordering(103) 00:18:53.578 fused_ordering(104) 00:18:53.578 fused_ordering(105) 00:18:53.578 fused_ordering(106) 00:18:53.578 fused_ordering(107) 00:18:53.578 fused_ordering(108) 00:18:53.578 fused_ordering(109) 00:18:53.578 fused_ordering(110) 00:18:53.578 fused_ordering(111) 00:18:53.578 fused_ordering(112) 00:18:53.578 fused_ordering(113) 00:18:53.578 fused_ordering(114) 00:18:53.578 fused_ordering(115) 00:18:53.578 fused_ordering(116) 00:18:53.578 fused_ordering(117) 00:18:53.578 fused_ordering(118) 00:18:53.578 fused_ordering(119) 00:18:53.578 fused_ordering(120) 00:18:53.578 fused_ordering(121) 00:18:53.578 fused_ordering(122) 00:18:53.578 fused_ordering(123) 00:18:53.578 fused_ordering(124) 00:18:53.578 fused_ordering(125) 00:18:53.578 fused_ordering(126) 00:18:53.578 fused_ordering(127) 00:18:53.578 fused_ordering(128) 00:18:53.578 fused_ordering(129) 00:18:53.578 fused_ordering(130) 00:18:53.578 fused_ordering(131) 00:18:53.578 fused_ordering(132) 00:18:53.578 fused_ordering(133) 00:18:53.578 fused_ordering(134) 00:18:53.578 fused_ordering(135) 00:18:53.578 fused_ordering(136) 00:18:53.578 fused_ordering(137) 00:18:53.578 fused_ordering(138) 00:18:53.578 fused_ordering(139) 00:18:53.578 fused_ordering(140) 00:18:53.578 fused_ordering(141) 00:18:53.578 fused_ordering(142) 00:18:53.578 fused_ordering(143) 00:18:53.578 fused_ordering(144) 00:18:53.578 fused_ordering(145) 00:18:53.578 fused_ordering(146) 00:18:53.578 fused_ordering(147) 00:18:53.578 fused_ordering(148) 00:18:53.578 fused_ordering(149) 00:18:53.578 fused_ordering(150) 00:18:53.578 fused_ordering(151) 00:18:53.578 fused_ordering(152) 00:18:53.578 fused_ordering(153) 00:18:53.578 fused_ordering(154) 00:18:53.578 fused_ordering(155) 00:18:53.578 fused_ordering(156) 00:18:53.578 fused_ordering(157) 00:18:53.578 fused_ordering(158) 00:18:53.578 
fused_ordering(159) 00:18:53.578 fused_ordering(160) 00:18:53.578 fused_ordering(161) 00:18:53.578 fused_ordering(162) 00:18:53.578 fused_ordering(163) 00:18:53.578 fused_ordering(164) 00:18:53.578 fused_ordering(165) 00:18:53.578 fused_ordering(166) 00:18:53.578 fused_ordering(167) 00:18:53.578 fused_ordering(168) 00:18:53.578 fused_ordering(169) 00:18:53.578 fused_ordering(170) 00:18:53.578 fused_ordering(171) 00:18:53.579 fused_ordering(172) 00:18:53.579 fused_ordering(173) 00:18:53.579 fused_ordering(174) 00:18:53.579 fused_ordering(175) 00:18:53.579 fused_ordering(176) 00:18:53.579 fused_ordering(177) 00:18:53.579 fused_ordering(178) 00:18:53.579 fused_ordering(179) 00:18:53.579 fused_ordering(180) 00:18:53.579 fused_ordering(181) 00:18:53.579 fused_ordering(182) 00:18:53.579 fused_ordering(183) 00:18:53.579 fused_ordering(184) 00:18:53.579 fused_ordering(185) 00:18:53.579 fused_ordering(186) 00:18:53.579 fused_ordering(187) 00:18:53.579 fused_ordering(188) 00:18:53.579 fused_ordering(189) 00:18:53.579 fused_ordering(190) 00:18:53.579 fused_ordering(191) 00:18:53.579 fused_ordering(192) 00:18:53.579 fused_ordering(193) 00:18:53.579 fused_ordering(194) 00:18:53.579 fused_ordering(195) 00:18:53.579 fused_ordering(196) 00:18:53.579 fused_ordering(197) 00:18:53.579 fused_ordering(198) 00:18:53.579 fused_ordering(199) 00:18:53.579 fused_ordering(200) 00:18:53.579 fused_ordering(201) 00:18:53.579 fused_ordering(202) 00:18:53.579 fused_ordering(203) 00:18:53.579 fused_ordering(204) 00:18:53.579 fused_ordering(205) 00:18:53.579 fused_ordering(206) 00:18:53.579 fused_ordering(207) 00:18:53.579 fused_ordering(208) 00:18:53.579 fused_ordering(209) 00:18:53.579 fused_ordering(210) 00:18:53.579 fused_ordering(211) 00:18:53.579 fused_ordering(212) 00:18:53.579 fused_ordering(213) 00:18:53.579 fused_ordering(214) 00:18:53.579 fused_ordering(215) 00:18:53.579 fused_ordering(216) 00:18:53.579 fused_ordering(217) 00:18:53.579 fused_ordering(218) 00:18:53.579 fused_ordering(219) 00:18:53.579 fused_ordering(220) 00:18:53.579 fused_ordering(221) 00:18:53.579 fused_ordering(222) 00:18:53.579 fused_ordering(223) 00:18:53.579 fused_ordering(224) 00:18:53.579 fused_ordering(225) 00:18:53.579 fused_ordering(226) 00:18:53.579 fused_ordering(227) 00:18:53.579 fused_ordering(228) 00:18:53.579 fused_ordering(229) 00:18:53.579 fused_ordering(230) 00:18:53.579 fused_ordering(231) 00:18:53.579 fused_ordering(232) 00:18:53.579 fused_ordering(233) 00:18:53.579 fused_ordering(234) 00:18:53.579 fused_ordering(235) 00:18:53.579 fused_ordering(236) 00:18:53.579 fused_ordering(237) 00:18:53.579 fused_ordering(238) 00:18:53.579 fused_ordering(239) 00:18:53.579 fused_ordering(240) 00:18:53.579 fused_ordering(241) 00:18:53.579 fused_ordering(242) 00:18:53.579 fused_ordering(243) 00:18:53.579 fused_ordering(244) 00:18:53.579 fused_ordering(245) 00:18:53.579 fused_ordering(246) 00:18:53.579 fused_ordering(247) 00:18:53.579 fused_ordering(248) 00:18:53.579 fused_ordering(249) 00:18:53.579 fused_ordering(250) 00:18:53.579 fused_ordering(251) 00:18:53.579 fused_ordering(252) 00:18:53.579 fused_ordering(253) 00:18:53.579 fused_ordering(254) 00:18:53.579 fused_ordering(255) 00:18:53.579 fused_ordering(256) 00:18:53.579 fused_ordering(257) 00:18:53.579 fused_ordering(258) 00:18:53.579 fused_ordering(259) 00:18:53.579 fused_ordering(260) 00:18:53.579 fused_ordering(261) 00:18:53.579 fused_ordering(262) 00:18:53.579 fused_ordering(263) 00:18:53.579 fused_ordering(264) 00:18:53.579 fused_ordering(265) 00:18:53.579 fused_ordering(266) 
00:18:53.579 fused_ordering(267) 00:18:53.579 fused_ordering(268) 00:18:53.579 fused_ordering(269) 00:18:53.579 fused_ordering(270) 00:18:53.579 fused_ordering(271) 00:18:53.579 fused_ordering(272) 00:18:53.579 fused_ordering(273) 00:18:53.579 fused_ordering(274) 00:18:53.579 fused_ordering(275) 00:18:53.579 fused_ordering(276) 00:18:53.579 fused_ordering(277) 00:18:53.579 fused_ordering(278) 00:18:53.579 fused_ordering(279) 00:18:53.579 fused_ordering(280) 00:18:53.579 fused_ordering(281) 00:18:53.579 fused_ordering(282) 00:18:53.579 fused_ordering(283) 00:18:53.579 fused_ordering(284) 00:18:53.579 fused_ordering(285) 00:18:53.579 fused_ordering(286) 00:18:53.579 fused_ordering(287) 00:18:53.579 fused_ordering(288) 00:18:53.579 fused_ordering(289) 00:18:53.579 fused_ordering(290) 00:18:53.579 fused_ordering(291) 00:18:53.579 fused_ordering(292) 00:18:53.579 fused_ordering(293) 00:18:53.579 fused_ordering(294) 00:18:53.579 fused_ordering(295) 00:18:53.579 fused_ordering(296) 00:18:53.579 fused_ordering(297) 00:18:53.579 fused_ordering(298) 00:18:53.579 fused_ordering(299) 00:18:53.579 fused_ordering(300) 00:18:53.579 fused_ordering(301) 00:18:53.579 fused_ordering(302) 00:18:53.579 fused_ordering(303) 00:18:53.579 fused_ordering(304) 00:18:53.579 fused_ordering(305) 00:18:53.579 fused_ordering(306) 00:18:53.579 fused_ordering(307) 00:18:53.579 fused_ordering(308) 00:18:53.579 fused_ordering(309) 00:18:53.579 fused_ordering(310) 00:18:53.579 fused_ordering(311) 00:18:53.579 fused_ordering(312) 00:18:53.579 fused_ordering(313) 00:18:53.579 fused_ordering(314) 00:18:53.579 fused_ordering(315) 00:18:53.579 fused_ordering(316) 00:18:53.579 fused_ordering(317) 00:18:53.579 fused_ordering(318) 00:18:53.579 fused_ordering(319) 00:18:53.579 fused_ordering(320) 00:18:53.579 fused_ordering(321) 00:18:53.579 fused_ordering(322) 00:18:53.579 fused_ordering(323) 00:18:53.579 fused_ordering(324) 00:18:53.579 fused_ordering(325) 00:18:53.579 fused_ordering(326) 00:18:53.579 fused_ordering(327) 00:18:53.579 fused_ordering(328) 00:18:53.579 fused_ordering(329) 00:18:53.579 fused_ordering(330) 00:18:53.579 fused_ordering(331) 00:18:53.579 fused_ordering(332) 00:18:53.579 fused_ordering(333) 00:18:53.579 fused_ordering(334) 00:18:53.579 fused_ordering(335) 00:18:53.579 fused_ordering(336) 00:18:53.579 fused_ordering(337) 00:18:53.579 fused_ordering(338) 00:18:53.579 fused_ordering(339) 00:18:53.579 fused_ordering(340) 00:18:53.579 fused_ordering(341) 00:18:53.579 fused_ordering(342) 00:18:53.579 fused_ordering(343) 00:18:53.579 fused_ordering(344) 00:18:53.579 fused_ordering(345) 00:18:53.579 fused_ordering(346) 00:18:53.579 fused_ordering(347) 00:18:53.579 fused_ordering(348) 00:18:53.579 fused_ordering(349) 00:18:53.579 fused_ordering(350) 00:18:53.579 fused_ordering(351) 00:18:53.579 fused_ordering(352) 00:18:53.579 fused_ordering(353) 00:18:53.579 fused_ordering(354) 00:18:53.579 fused_ordering(355) 00:18:53.579 fused_ordering(356) 00:18:53.579 fused_ordering(357) 00:18:53.579 fused_ordering(358) 00:18:53.579 fused_ordering(359) 00:18:53.579 fused_ordering(360) 00:18:53.579 fused_ordering(361) 00:18:53.579 fused_ordering(362) 00:18:53.579 fused_ordering(363) 00:18:53.579 fused_ordering(364) 00:18:53.579 fused_ordering(365) 00:18:53.579 fused_ordering(366) 00:18:53.579 fused_ordering(367) 00:18:53.579 fused_ordering(368) 00:18:53.579 fused_ordering(369) 00:18:53.579 fused_ordering(370) 00:18:53.579 fused_ordering(371) 00:18:53.579 fused_ordering(372) 00:18:53.579 fused_ordering(373) 00:18:53.579 
fused_ordering(374) 00:18:53.579 fused_ordering(375) 00:18:53.579 fused_ordering(376) 00:18:53.579 fused_ordering(377) 00:18:53.579 fused_ordering(378) 00:18:53.579 fused_ordering(379) 00:18:53.579 fused_ordering(380) 00:18:53.579 fused_ordering(381) 00:18:53.579 fused_ordering(382) 00:18:53.579 fused_ordering(383) 00:18:53.579 fused_ordering(384) 00:18:53.579 fused_ordering(385) 00:18:53.579 fused_ordering(386) 00:18:53.579 fused_ordering(387) 00:18:53.579 fused_ordering(388) 00:18:53.579 fused_ordering(389) 00:18:53.579 fused_ordering(390) 00:18:53.579 fused_ordering(391) 00:18:53.579 fused_ordering(392) 00:18:53.579 fused_ordering(393) 00:18:53.579 fused_ordering(394) 00:18:53.579 fused_ordering(395) 00:18:53.579 fused_ordering(396) 00:18:53.579 fused_ordering(397) 00:18:53.579 fused_ordering(398) 00:18:53.579 fused_ordering(399) 00:18:53.579 fused_ordering(400) 00:18:53.579 fused_ordering(401) 00:18:53.579 fused_ordering(402) 00:18:53.579 fused_ordering(403) 00:18:53.579 fused_ordering(404) 00:18:53.579 fused_ordering(405) 00:18:53.579 fused_ordering(406) 00:18:53.579 fused_ordering(407) 00:18:53.579 fused_ordering(408) 00:18:53.579 fused_ordering(409) 00:18:53.579 fused_ordering(410) 00:18:53.839 fused_ordering(411) 00:18:53.839 fused_ordering(412) 00:18:53.839 fused_ordering(413) 00:18:53.839 fused_ordering(414) 00:18:53.839 fused_ordering(415) 00:18:53.839 fused_ordering(416) 00:18:53.839 fused_ordering(417) 00:18:53.839 fused_ordering(418) 00:18:53.839 fused_ordering(419) 00:18:53.839 fused_ordering(420) 00:18:53.839 fused_ordering(421) 00:18:53.839 fused_ordering(422) 00:18:53.839 fused_ordering(423) 00:18:53.839 fused_ordering(424) 00:18:53.839 fused_ordering(425) 00:18:53.839 fused_ordering(426) 00:18:53.839 fused_ordering(427) 00:18:53.839 fused_ordering(428) 00:18:53.839 fused_ordering(429) 00:18:53.839 fused_ordering(430) 00:18:53.839 fused_ordering(431) 00:18:53.839 fused_ordering(432) 00:18:53.839 fused_ordering(433) 00:18:53.839 fused_ordering(434) 00:18:53.839 fused_ordering(435) 00:18:53.839 fused_ordering(436) 00:18:53.839 fused_ordering(437) 00:18:53.839 fused_ordering(438) 00:18:53.839 fused_ordering(439) 00:18:53.839 fused_ordering(440) 00:18:53.839 fused_ordering(441) 00:18:53.839 fused_ordering(442) 00:18:53.839 fused_ordering(443) 00:18:53.839 fused_ordering(444) 00:18:53.839 fused_ordering(445) 00:18:53.839 fused_ordering(446) 00:18:53.839 fused_ordering(447) 00:18:53.839 fused_ordering(448) 00:18:53.839 fused_ordering(449) 00:18:53.839 fused_ordering(450) 00:18:53.839 fused_ordering(451) 00:18:53.839 fused_ordering(452) 00:18:53.839 fused_ordering(453) 00:18:53.839 fused_ordering(454) 00:18:53.839 fused_ordering(455) 00:18:53.839 fused_ordering(456) 00:18:53.839 fused_ordering(457) 00:18:53.839 fused_ordering(458) 00:18:53.839 fused_ordering(459) 00:18:53.839 fused_ordering(460) 00:18:53.839 fused_ordering(461) 00:18:53.839 fused_ordering(462) 00:18:53.839 fused_ordering(463) 00:18:53.839 fused_ordering(464) 00:18:53.839 fused_ordering(465) 00:18:53.840 fused_ordering(466) 00:18:53.840 fused_ordering(467) 00:18:53.840 fused_ordering(468) 00:18:53.840 fused_ordering(469) 00:18:53.840 fused_ordering(470) 00:18:53.840 fused_ordering(471) 00:18:53.840 fused_ordering(472) 00:18:53.840 fused_ordering(473) 00:18:53.840 fused_ordering(474) 00:18:53.840 fused_ordering(475) 00:18:53.840 fused_ordering(476) 00:18:53.840 fused_ordering(477) 00:18:53.840 fused_ordering(478) 00:18:53.840 fused_ordering(479) 00:18:53.840 fused_ordering(480) 00:18:53.840 fused_ordering(481) 
00:18:53.840 fused_ordering(482) 00:18:53.840 fused_ordering(483) 00:18:53.840 fused_ordering(484) 00:18:53.840 fused_ordering(485) 00:18:53.840 fused_ordering(486) 00:18:53.840 fused_ordering(487) 00:18:53.840 fused_ordering(488) 00:18:53.840 fused_ordering(489) 00:18:53.840 fused_ordering(490) 00:18:53.840 fused_ordering(491) 00:18:53.840 fused_ordering(492) 00:18:53.840 fused_ordering(493) 00:18:53.840 fused_ordering(494) 00:18:53.840 fused_ordering(495) 00:18:53.840 fused_ordering(496) 00:18:53.840 fused_ordering(497) 00:18:53.840 fused_ordering(498) 00:18:53.840 fused_ordering(499) 00:18:53.840 fused_ordering(500) 00:18:53.840 fused_ordering(501) 00:18:53.840 fused_ordering(502) 00:18:53.840 fused_ordering(503) 00:18:53.840 fused_ordering(504) 00:18:53.840 fused_ordering(505) 00:18:53.840 fused_ordering(506) 00:18:53.840 fused_ordering(507) 00:18:53.840 fused_ordering(508) 00:18:53.840 fused_ordering(509) 00:18:53.840 fused_ordering(510) 00:18:53.840 fused_ordering(511) 00:18:53.840 fused_ordering(512) 00:18:53.840 fused_ordering(513) 00:18:53.840 fused_ordering(514) 00:18:53.840 fused_ordering(515) 00:18:53.840 fused_ordering(516) 00:18:53.840 fused_ordering(517) 00:18:53.840 fused_ordering(518) 00:18:53.840 fused_ordering(519) 00:18:53.840 fused_ordering(520) 00:18:53.840 fused_ordering(521) 00:18:53.840 fused_ordering(522) 00:18:53.840 fused_ordering(523) 00:18:53.840 fused_ordering(524) 00:18:53.840 fused_ordering(525) 00:18:53.840 fused_ordering(526) 00:18:53.840 fused_ordering(527) 00:18:53.840 fused_ordering(528) 00:18:53.840 fused_ordering(529) 00:18:53.840 fused_ordering(530) 00:18:53.840 fused_ordering(531) 00:18:53.840 fused_ordering(532) 00:18:53.840 fused_ordering(533) 00:18:53.840 fused_ordering(534) 00:18:53.840 fused_ordering(535) 00:18:53.840 fused_ordering(536) 00:18:53.840 fused_ordering(537) 00:18:53.840 fused_ordering(538) 00:18:53.840 fused_ordering(539) 00:18:53.840 fused_ordering(540) 00:18:53.840 fused_ordering(541) 00:18:53.840 fused_ordering(542) 00:18:53.840 fused_ordering(543) 00:18:53.840 fused_ordering(544) 00:18:53.840 fused_ordering(545) 00:18:53.840 fused_ordering(546) 00:18:53.840 fused_ordering(547) 00:18:53.840 fused_ordering(548) 00:18:53.840 fused_ordering(549) 00:18:53.840 fused_ordering(550) 00:18:53.840 fused_ordering(551) 00:18:53.840 fused_ordering(552) 00:18:53.840 fused_ordering(553) 00:18:53.840 fused_ordering(554) 00:18:53.840 fused_ordering(555) 00:18:53.840 fused_ordering(556) 00:18:53.840 fused_ordering(557) 00:18:53.840 fused_ordering(558) 00:18:53.840 fused_ordering(559) 00:18:53.840 fused_ordering(560) 00:18:53.840 fused_ordering(561) 00:18:53.840 fused_ordering(562) 00:18:53.840 fused_ordering(563) 00:18:53.840 fused_ordering(564) 00:18:53.840 fused_ordering(565) 00:18:53.840 fused_ordering(566) 00:18:53.840 fused_ordering(567) 00:18:53.840 fused_ordering(568) 00:18:53.840 fused_ordering(569) 00:18:53.840 fused_ordering(570) 00:18:53.840 fused_ordering(571) 00:18:53.840 fused_ordering(572) 00:18:53.840 fused_ordering(573) 00:18:53.840 fused_ordering(574) 00:18:53.840 fused_ordering(575) 00:18:53.840 fused_ordering(576) 00:18:53.840 fused_ordering(577) 00:18:53.840 fused_ordering(578) 00:18:53.840 fused_ordering(579) 00:18:53.840 fused_ordering(580) 00:18:53.840 fused_ordering(581) 00:18:53.840 fused_ordering(582) 00:18:53.840 fused_ordering(583) 00:18:53.840 fused_ordering(584) 00:18:53.840 fused_ordering(585) 00:18:53.840 fused_ordering(586) 00:18:53.840 fused_ordering(587) 00:18:53.840 fused_ordering(588) 00:18:53.840 
fused_ordering(589) 00:18:53.840 fused_ordering(590) 00:18:53.840 fused_ordering(591) 00:18:53.840 fused_ordering(592) 00:18:53.840 fused_ordering(593) 00:18:53.840 fused_ordering(594) 00:18:53.840 fused_ordering(595) 00:18:53.840 fused_ordering(596) 00:18:53.840 fused_ordering(597) 00:18:53.840 fused_ordering(598) 00:18:53.840 fused_ordering(599) 00:18:53.840 fused_ordering(600) 00:18:53.840 fused_ordering(601) 00:18:53.840 fused_ordering(602) 00:18:53.840 fused_ordering(603) 00:18:53.840 fused_ordering(604) 00:18:53.840 fused_ordering(605) 00:18:53.840 fused_ordering(606) 00:18:53.840 fused_ordering(607) 00:18:53.840 fused_ordering(608) 00:18:53.840 fused_ordering(609) 00:18:53.840 fused_ordering(610) 00:18:53.840 fused_ordering(611) 00:18:53.840 fused_ordering(612) 00:18:53.840 fused_ordering(613) 00:18:53.840 fused_ordering(614) 00:18:53.840 fused_ordering(615) 00:18:53.840 fused_ordering(616) 00:18:53.840 fused_ordering(617) 00:18:53.840 fused_ordering(618) 00:18:53.840 fused_ordering(619) 00:18:53.840 fused_ordering(620) 00:18:53.840 fused_ordering(621) 00:18:53.840 fused_ordering(622) 00:18:53.840 fused_ordering(623) 00:18:53.840 fused_ordering(624) 00:18:53.840 fused_ordering(625) 00:18:53.840 fused_ordering(626) 00:18:53.840 fused_ordering(627) 00:18:53.840 fused_ordering(628) 00:18:53.840 fused_ordering(629) 00:18:53.840 fused_ordering(630) 00:18:53.840 fused_ordering(631) 00:18:53.840 fused_ordering(632) 00:18:53.840 fused_ordering(633) 00:18:53.840 fused_ordering(634) 00:18:53.840 fused_ordering(635) 00:18:53.840 fused_ordering(636) 00:18:53.840 fused_ordering(637) 00:18:53.840 fused_ordering(638) 00:18:53.840 fused_ordering(639) 00:18:53.840 fused_ordering(640) 00:18:53.840 fused_ordering(641) 00:18:53.840 fused_ordering(642) 00:18:53.840 fused_ordering(643) 00:18:53.840 fused_ordering(644) 00:18:53.840 fused_ordering(645) 00:18:53.840 fused_ordering(646) 00:18:53.840 fused_ordering(647) 00:18:53.840 fused_ordering(648) 00:18:53.840 fused_ordering(649) 00:18:53.840 fused_ordering(650) 00:18:53.840 fused_ordering(651) 00:18:53.840 fused_ordering(652) 00:18:53.840 fused_ordering(653) 00:18:53.840 fused_ordering(654) 00:18:53.840 fused_ordering(655) 00:18:53.840 fused_ordering(656) 00:18:53.840 fused_ordering(657) 00:18:53.840 fused_ordering(658) 00:18:53.840 fused_ordering(659) 00:18:53.840 fused_ordering(660) 00:18:53.840 fused_ordering(661) 00:18:53.840 fused_ordering(662) 00:18:53.840 fused_ordering(663) 00:18:53.840 fused_ordering(664) 00:18:53.840 fused_ordering(665) 00:18:53.840 fused_ordering(666) 00:18:53.840 fused_ordering(667) 00:18:53.840 fused_ordering(668) 00:18:53.840 fused_ordering(669) 00:18:53.840 fused_ordering(670) 00:18:53.840 fused_ordering(671) 00:18:53.840 fused_ordering(672) 00:18:53.840 fused_ordering(673) 00:18:53.840 fused_ordering(674) 00:18:53.840 fused_ordering(675) 00:18:53.840 fused_ordering(676) 00:18:53.840 fused_ordering(677) 00:18:53.840 fused_ordering(678) 00:18:53.840 fused_ordering(679) 00:18:53.840 fused_ordering(680) 00:18:53.840 fused_ordering(681) 00:18:53.840 fused_ordering(682) 00:18:53.840 fused_ordering(683) 00:18:53.840 fused_ordering(684) 00:18:53.840 fused_ordering(685) 00:18:53.840 fused_ordering(686) 00:18:53.840 fused_ordering(687) 00:18:53.840 fused_ordering(688) 00:18:53.840 fused_ordering(689) 00:18:53.840 fused_ordering(690) 00:18:53.840 fused_ordering(691) 00:18:53.840 fused_ordering(692) 00:18:53.840 fused_ordering(693) 00:18:53.840 fused_ordering(694) 00:18:53.840 fused_ordering(695) 00:18:53.840 fused_ordering(696) 
00:18:53.840 fused_ordering(697) 00:18:53.840 fused_ordering(698) ... 00:18:54.101 fused_ordering(1017) 00:18:54.101 fused_ordering(1018)
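The run condensed above is the per-iteration output of the fused-ordering test, one fused_ordering(N) line per completed round. As a hedged aside: whether a controller accepts fused Compare and Write pairs at all is advertised in bit 0 of the FUSES field of its identify-controller data. An illustrative nvme-cli check (assuming /dev/nvme0 is the controller under test and jq is installed; this is not part of the test script itself):

    # Illustrative only: FUSES bit 0 = fused Compare and Write pair support.
    fuses=$(nvme id-ctrl /dev/nvme0 -o json | jq -r .fuses)
    if (( fuses & 0x1 )); then
        echo "controller supports fused Compare and Write"
    fi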
00:18:54.101 fused_ordering(1019) 00:18:54.101 fused_ordering(1020) 00:18:54.101 fused_ordering(1021) 00:18:54.101 fused_ordering(1022) 00:18:54.101 fused_ordering(1023) 00:18:54.101 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:54.101 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:54.101 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:54.101 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:54.101 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:54.101 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:54.101 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:54.101 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:54.101 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:54.101 rmmod nvme_rdma 00:18:54.101 rmmod nvme_fabrics 00:18:54.101 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:54.101 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:54.101 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:54.101 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 3251508 ']' 00:18:54.101 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 3251508 00:18:54.101 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 3251508 ']' 00:18:54.101 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 3251508 00:18:54.101 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:18:54.101 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:54.101 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3251508 00:18:54.361 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:54.361 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:54.361 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3251508' 00:18:54.361 killing process with pid 3251508 00:18:54.361 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 3251508 00:18:54.361 02:00:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 3251508 00:18:55.742 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:55.742 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:18:55.742 00:18:55.742 real 0m9.423s 00:18:55.742 user 0m5.798s 00:18:55.742 sys 0m5.184s 00:18:55.742 02:00:15 
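The teardown traced above follows a fixed pattern: clear the signal trap, sync, retry unloading nvme-rdma (module removal can race with connection teardown, hence the {1..20} loop), unload nvme-fabrics, then kill the target by PID and wait for it. A condensed sketch of that pattern, not the exact helpers from nvmf/common.sh and common/autotest_common.sh:

    # Condensed sketch of the cleanup pattern visible in the trace above.
    nvmf_teardown() {
        local pid=$1
        sync
        set +e
        for i in {1..20}; do             # unload can fail while queues drain
            modprobe -v -r nvme-rdma && break
            sleep 1
        done
        modprobe -v -r nvme-fabrics
        set -e
        if kill -0 "$pid" 2>/dev/null; then  # still running?
            kill "$pid" && wait "$pid"       # wait only reaps our own children
        fi
    }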
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:55.742 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:55.742 ************************************ 00:18:55.742 END TEST nvmf_fused_ordering 00:18:55.742 ************************************ 00:18:55.742 02:00:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:18:55.742 02:00:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:55.742 02:00:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:55.743 ************************************ 00:18:55.743 START TEST nvmf_ns_masking 00:18:55.743 ************************************ 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:18:55.743 * Looking for test storage... 00:18:55.743 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:55.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.743 --rc genhtml_branch_coverage=1 00:18:55.743 --rc genhtml_function_coverage=1 00:18:55.743 --rc genhtml_legend=1 00:18:55.743 --rc geninfo_all_blocks=1 00:18:55.743 --rc geninfo_unexecuted_blocks=1 00:18:55.743 00:18:55.743 ' 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:55.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.743 --rc genhtml_branch_coverage=1 00:18:55.743 --rc genhtml_function_coverage=1 00:18:55.743 --rc genhtml_legend=1 00:18:55.743 --rc geninfo_all_blocks=1 00:18:55.743 --rc geninfo_unexecuted_blocks=1 00:18:55.743 00:18:55.743 ' 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:55.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.743 --rc genhtml_branch_coverage=1 00:18:55.743 --rc genhtml_function_coverage=1 00:18:55.743 --rc genhtml_legend=1 00:18:55.743 --rc geninfo_all_blocks=1 00:18:55.743 --rc geninfo_unexecuted_blocks=1 00:18:55.743 00:18:55.743 ' 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:55.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.743 --rc genhtml_branch_coverage=1 00:18:55.743 --rc genhtml_function_coverage=1 00:18:55.743 --rc genhtml_legend=1 00:18:55.743 --rc geninfo_all_blocks=1 00:18:55.743 --rc geninfo_unexecuted_blocks=1 00:18:55.743 00:18:55.743 ' 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:18:55.743 02:00:15 
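The cmp_versions trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x: both version strings are split on ".", "-" and ":" and the numeric components are compared left to right. A minimal re-implementation of the same idea (a sketch, not the exact function):

    # Minimal sketch in the style of cmp_versions from scripts/common.sh.
    version_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local v n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        done
        return 1    # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"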
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.743 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:55.744 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:18:55.744 02:00:15 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=ae067295-afc7-4be2-9499-34951c689bce 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=78df5944-5cf1-4ffa-b255-feebf3f5f9f9 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=31c904db-506b-4458-8f0c-a9e9e1e6e7de 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:55.744 02:00:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:02.364 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:02.365 02:00:21 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:19:02.365 Found 0000:18:00.0 (0x8086 - 0x159b) 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:19:02.365 Found 0000:18:00.1 (0x8086 - 0x159b) 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@403 -- # modinfo irdma 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 
0000:18:00.0: cvl_0_0' 00:19:02.365 Found net devices under 0000:18:00.0: cvl_0_0 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:19:02.365 Found net devices under 0000:18:00.1: cvl_0_1 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # rdma_device_init 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@528 -- # allocate_nic_ips 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
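The device discovery above is pure sysfs: each PCI function matched by vendor:device ID is mapped to its kernel netdev by globbing /sys/bus/pci/devices/<BDF>/net/, and the RDMA stack is then brought up with a row of modprobes (ib_core, ib_uverbs, rdma_cm, and friends). The sysfs lookup reduced to its core, using the two e810 ports found on this rig:

    # Every network PCI function lists its interface name(s) under
    # /sys/bus/pci/devices/<BDF>/net/.
    for pci in 0000:18:00.0 0000:18:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $netdir ]] || continue    # skip functions with no netdev bound
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done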
nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo cvl_0_0 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo cvl_0_1 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:19:02.365 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:19:02.365 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:19:02.365 altname enp24s0f0np0 00:19:02.365 altname ens785f0np0 00:19:02.365 inet 192.168.100.8/24 scope global cvl_0_0 00:19:02.365 valid_lft forever preferred_lft forever 00:19:02.365 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:19:02.365 valid_lft forever preferred_lft forever 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@116 -- # interface=cvl_0_1 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:19:02.365 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:19:02.365 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:19:02.365 altname enp24s0f1np1 00:19:02.365 altname ens785f1np1 00:19:02.365 inet 192.168.100.9/24 scope global cvl_0_1 00:19:02.365 valid_lft forever preferred_lft forever 00:19:02.365 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:19:02.365 valid_lft forever preferred_lft forever 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:02.365 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo cvl_0_0 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo cvl_0_1 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:19:02.366 192.168.100.9' 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # head -n 1 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:19:02.366 192.168.100.9' 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # head -n 1 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:19:02.366 192.168.100.9' 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # tail -n +2 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- 
# timing_enter start_nvmf_tgt 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=3254805 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 3254805 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3254805 ']' 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:02.366 02:00:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:02.366 [2024-10-09 02:00:21.751092] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:19:02.366 [2024-10-09 02:00:21.751194] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.366 [2024-10-09 02:00:21.879988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.366 [2024-10-09 02:00:22.081585] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.366 [2024-10-09 02:00:22.081636] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.366 [2024-10-09 02:00:22.081651] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.366 [2024-10-09 02:00:22.081665] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.366 [2024-10-09 02:00:22.081675] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
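nvmfappstart, traced above, boils down to: launch build/bin/nvmf_tgt with a shared-memory id and a tracepoint mask, remember its PID, and poll the JSON-RPC Unix socket until it answers. A hedged sketch of that wait loop; the retry budget and the rpc_get_methods liveness probe are illustrative, not the exact waitforlisten helper:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    for (( i = 0; i < 100; i++ )); do
        # any cheap RPC works as a liveness probe once the socket is up
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done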
00:19:02.366 [2024-10-09 02:00:22.083099] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.935 02:00:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:02.935 02:00:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:19:02.935 02:00:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:02.935 02:00:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:02.935 02:00:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:02.935 02:00:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.935 02:00:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:03.194 [2024-10-09 02:00:22.793100] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x6120000289c0/0x617000007fc0) succeed. 00:19:03.194 [2024-10-09 02:00:22.802654] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000028b40/0x617000008340) succeed. 00:19:03.194 [2024-10-09 02:00:22.802690] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:19:03.194 02:00:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:19:03.194 02:00:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:19:03.194 02:00:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:03.453 Malloc1 00:19:03.453 02:00:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:03.712 Malloc2 00:19:03.712 02:00:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:03.972 02:00:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:19:03.972 02:00:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:04.232 [2024-10-09 02:00:23.931908] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:04.232 02:00:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:19:04.232 02:00:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 31c904db-506b-4458-8f0c-a9e9e1e6e7de -a 192.168.100.8 -s 4420 -i 4 00:19:04.491 02:00:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # 
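Condensing the RPC sequence just traced: one RDMA transport, two 64 MiB malloc bdevs, one subsystem that gets namespace 1 and an RDMA listener, and then the host side dials in with a fixed host NQN and host ID. All names, sizes and addresses below are the ones this run actually used:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1    # 64 MiB, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 31c904db-506b-4458-8f0c-a9e9e1e6e7de -a 192.168.100.8 -s 4420 -i 4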
waitforserial SPDKISFASTANDAWESOME 00:19:04.491 02:00:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:19:04.491 02:00:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:04.491 02:00:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:04.491 02:00:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:19:06.395 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:06.395 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:06.395 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:06.395 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:06.395 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:06.395 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:19:06.395 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:06.396 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:06.396 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:06.396 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:06.396 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:19:06.396 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:06.396 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:06.396 [ 0]:0x1 00:19:06.396 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:06.396 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:06.396 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fae427ff4389470a970c9f716f2ac2a3 00:19:06.396 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fae427ff4389470a970c9f716f2ac2a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:06.396 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:19:06.655 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:19:06.655 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:06.655 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:06.655 [ 0]:0x1 00:19:06.655 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:06.655 02:00:26 
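The visibility probe exercised above, reconstructed as a helper: a namespace counts as visible when it shows up in nvme list-ns and its NGUID from nvme id-ns is non-zero (a masked namespace reads back as all zeroes). A sketch of ns_is_visible from target/ns_masking.sh, assuming /dev/nvme0 is the connected controller:

    ns_is_visible() {
        local nsid=$1    # e.g. 0x1
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
    ns_is_visible 0x1 && echo "namespace 1 is visible to this host"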
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:06.914 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fae427ff4389470a970c9f716f2ac2a3 00:19:06.915 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fae427ff4389470a970c9f716f2ac2a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:06.915 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:19:06.915 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:06.915 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:06.915 [ 1]:0x2 00:19:06.915 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:06.915 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:06.915 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=051a9f46256b4151b12c032b5eb99604 00:19:06.915 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 051a9f46256b4151b12c032b5eb99604 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:06.915 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:19:06.915 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:07.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:07.174 02:00:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:07.434 02:00:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:19:07.692 02:00:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:19:07.692 02:00:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 31c904db-506b-4458-8f0c-a9e9e1e6e7de -a 192.168.100.8 -s 4420 -i 4 00:19:07.692 02:00:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:19:07.692 02:00:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:19:07.692 02:00:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:07.692 02:00:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:19:07.692 02:00:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:19:07.692 02:00:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:19:09.595 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:09.595 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:09.595 02:00:29 
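The pivot of the whole test happens here: namespace 1 is dropped and re-added with --no-auto-visible, so from now on no host sees it unless it is explicitly attached. The two RPCs as issued above:

    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

After the host reconnects, the earlier visibility check is expected to fail until nvmf_ns_add_host grants this host access again.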
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:09.595 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:09.595 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:09.595 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:19:09.595 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:09.595 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:09.854 [ 0]:0x2 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=051a9f46256b4151b12c032b5eb99604 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 051a9f46256b4151b12c032b5eb99604 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:09.854 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:10.113 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:19:10.113 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:10.113 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:10.113 [ 0]:0x1 00:19:10.113 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:10.113 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:10.113 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fae427ff4389470a970c9f716f2ac2a3 00:19:10.113 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fae427ff4389470a970c9f716f2ac2a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:10.113 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:19:10.113 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:10.113 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:10.113 [ 1]:0x2 00:19:10.113 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:10.113 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:10.113 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=051a9f46256b4151b12c032b5eb99604 00:19:10.113 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 051a9f46256b4151b12c032b5eb99604 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:10.113 02:00:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:10.372 [ 0]:0x2 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=051a9f46256b4151b12c032b5eb99604 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 051a9f46256b4151b12c032b5eb99604 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:19:10.372 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:10.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:10.939 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:10.939 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:19:10.939 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 31c904db-506b-4458-8f0c-a9e9e1e6e7de -a 192.168.100.8 -s 4420 -i 4 00:19:11.197 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:11.197 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:19:11.197 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:11.197 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:19:11.197 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:19:11.197 02:00:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:13.102 [ 0]:0x1 00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fae427ff4389470a970c9f716f2ac2a3 00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fae427ff4389470a970c9f716f2ac2a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:13.102 [ 1]:0x2 
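The ns_is_visible checks traced above reduce to two nvme-cli probes: nvme list-ns to see whether the namespace ID is enumerated at all, and nvme id-ns -o json piped through jq -r .nguid, since a masked namespace reports an all-zero NGUID. A minimal sketch of that helper as it appears in the trace (the /dev/nvme0 name comes from the list-subsys lookup above):

    ns_is_visible() {
        # a masked namespace is missing from the active namespace list
        nvme list-ns /dev/nvme0 | grep "$1"
        # and a visible one reports a non-zero NGUID
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

So ns_is_visible 0x1 succeeds only while namespace 1 is exposed to this host.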
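The NOT wrapper used for the negative assertions simply inverts an exit status; the autotest_common.sh version traced above also validates that its argument is a callable function or executable before running it. A simplified sketch:

    NOT() {
        # succeed only when the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }

With it, NOT ns_is_visible 0x1 asserts that namespace 1 is hidden.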
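waitforserial is the settle loop between nvme connect and the visibility checks: it polls lsblk until the expected number of block devices carrying the SPDK serial appears (1 for the first connect, 2 for this one). Roughly, following the trace:

    waitforserial() {
        local serial=$1 expected=${2:-1} i=0 nvme_devices
        sleep 2
        while (( i++ <= 15 )); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == expected )) && return 0
            sleep 1
        done
        return 1
    }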
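The masking sequence being exercised is worth isolating: recreate the namespace with --no-auto-visible so that no host sees it by default, then grant and revoke visibility per host NQN. These are the rpc.py calls from the trace, with the workspace path shortened:

    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host         nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # nsid 1 visible to host1
    rpc.py nvmf_ns_remove_host      nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # hidden again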
00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:13.102 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:13.361 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=051a9f46256b4151b12c032b5eb99604 00:19:13.361 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 051a9f46256b4151b12c032b5eb99604 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:13.361 02:00:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:13.361 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:19:13.361 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:13.361 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:13.361 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:13.361 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.361 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:13.361 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.361 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:13.361 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:13.361 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:13.361 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:13.361 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:13.620 [ 0]:0x2 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=051a9f46256b4151b12c032b5eb99604 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 051a9f46256b4151b12c032b5eb99604 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py ]] 00:19:13.620 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:13.879 [2024-10-09 02:00:33.441790] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:13.879 request: 00:19:13.879 { 00:19:13.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.879 "nsid": 2, 00:19:13.879 "host": "nqn.2016-06.io.spdk:host1", 00:19:13.879 "method": "nvmf_ns_remove_host", 00:19:13.879 "req_id": 1 00:19:13.879 } 00:19:13.879 Got JSON-RPC error response 00:19:13.879 response: 00:19:13.879 { 00:19:13.879 "code": -32602, 00:19:13.879 "message": "Invalid parameters" 00:19:13.879 } 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:13.880 [ 0]:0x2 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=051a9f46256b4151b12c032b5eb99604 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 051a9f46256b4151b12c032b5eb99604 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:13.880 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:19:13.880 02:00:33 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:14.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:14.139 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3256546 00:19:14.139 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:19:14.139 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.139 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3256546 /var/tmp/host.sock 00:19:14.139 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3256546 ']' 00:19:14.139 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:19:14.139 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:14.139 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:14.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:14.139 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:14.139 02:00:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:14.398 [2024-10-09 02:00:34.003433] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
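The NOT case at ns_masking.sh@111 above is the deliberate failure: nvmf_ns_remove_host against namespace 2 is rejected with -32602 Invalid parameters, presumably because that namespace was left auto-visible and so has no per-host allow list to edit. Reformatted from the trace:

    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
    # request:  {"nqn": "nqn.2016-06.io.spdk:cnode1", "nsid": 2,
    #            "host": "nqn.2016-06.io.spdk:host1", "method": "nvmf_ns_remove_host"}
    # response: {"code": -32602, "message": "Invalid parameters"}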
00:19:14.398 [2024-10-09 02:00:34.003535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3256546 ] 00:19:14.398 [2024-10-09 02:00:34.129448] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.657 [2024-10-09 02:00:34.331481] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.597 02:00:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:15.597 02:00:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:19:15.597 02:00:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:15.597 02:00:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:15.856 02:00:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid ae067295-afc7-4be2-9499-34951c689bce 00:19:15.856 02:00:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:19:15.856 02:00:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g AE067295AFC74BE2949934951C689BCE -i 00:19:16.115 02:00:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 78df5944-5cf1-4ffa-b255-feebf3f5f9f9 00:19:16.116 02:00:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:19:16.116 02:00:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 78DF59445CF14FFAB255FEEBF3F5F9F9 -i 00:19:16.116 02:00:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:16.374 02:00:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:16.633 02:00:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:16.633 02:00:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:16.893 nvme0n1 00:19:16.893 02:00:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:16.893 02:00:36 
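ns_masking.sh@117 starts a second SPDK application to play the host role, on its own RPC socket (-r /var/tmp/host.sock) and a one-core mask (-m 2, hence the reactor on core 1 in the startup banner). A sketch of the launch-and-wait pattern; polling rpc_get_methods is a lightweight stand-in for the real waitforlisten helper:

    ./build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
    hostpid=$!
    # block until the app answers on its RPC socket
    until rpc.py -s /var/tmp/host.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done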
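uuid2nguid (nvmf/common.sh@785 in the trace) only needs to strip the dashes and upper-case the hex so a bdev UUID can be handed to nvmf_subsystem_add_ns -g as the namespace NGUID:

    uuid2nguid() {
        echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
    }
    uuid2nguid ae067295-afc7-4be2-9499-34951c689bce
    # -> AE067295AFC74BE2949934951C689BCE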
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:17.152 nvme1n2 00:19:17.152 02:00:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:17.152 02:00:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:17.152 02:00:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:17.152 02:00:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:17.152 02:00:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:19:17.412 02:00:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:17.412 02:00:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:17.412 02:00:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:17.412 02:00:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:17.671 02:00:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ ae067295-afc7-4be2-9499-34951c689bce == \a\e\0\6\7\2\9\5\-\a\f\c\7\-\4\b\e\2\-\9\4\9\9\-\3\4\9\5\1\c\6\8\9\b\c\e ]] 00:19:17.671 02:00:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:17.671 02:00:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:17.671 02:00:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:17.671 02:00:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 78df5944-5cf1-4ffa-b255-feebf3f5f9f9 == \7\8\d\f\5\9\4\4\-\5\c\f\1\-\4\f\f\a\-\b\2\5\5\-\f\e\e\b\f\3\f\5\f\9\f\9 ]] 00:19:17.671 02:00:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3256546 00:19:17.671 02:00:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3256546 ']' 00:19:17.671 02:00:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3256546 00:19:17.671 02:00:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:19:17.671 02:00:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:17.671 02:00:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3256546 00:19:17.931 02:00:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:17.931 02:00:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:17.931 02:00:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
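Verification then runs against the host application's socket: bdev_get_bdevs with no arguments lists every attached bdev, and with -b it returns a single bdev whose uuid should round-trip to the NGUID assigned above. The jq pipelines are the ones in the trace:

    rpc.py -s /var/tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    # expected: nvme0n1 nvme1n2
    rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'
    # expected: ae067295-afc7-4be2-9499-34951c689bce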
common/autotest_common.sh@968 -- # echo 'killing process with pid 3256546' 00:19:17.931 killing process with pid 3256546 00:19:17.931 02:00:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3256546 00:19:17.931 02:00:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3256546 00:19:20.469 02:00:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:20.469 rmmod nvme_rdma 00:19:20.469 rmmod nvme_fabrics 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 3254805 ']' 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 3254805 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3254805 ']' 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3254805 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3254805 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3254805' 00:19:20.469 killing process with pid 3254805 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3254805 00:19:20.469 02:00:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3254805 00:19:22.379 02:00:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
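killprocess, traced above for the host pid, checks before it signals: kill -0 to confirm the pid is alive, ps to read the command name, and a guard against killing a sudo wrapper. Simplified:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                            # already gone
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null
    }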
nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:22.379 02:00:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:19:22.379 00:19:22.379 real 0m26.595s 00:19:22.379 user 0m32.782s 00:19:22.379 sys 0m7.426s 00:19:22.379 02:00:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:22.379 02:00:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:22.379 ************************************ 00:19:22.379 END TEST nvmf_ns_masking 00:19:22.379 ************************************ 00:19:22.379 02:00:41 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:22.379 02:00:41 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:19:22.379 02:00:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:22.379 02:00:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:22.379 02:00:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:22.379 ************************************ 00:19:22.379 START TEST nvmf_nvme_cli 00:19:22.379 ************************************ 00:19:22.379 02:00:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:19:22.379 * Looking for test storage... 00:19:22.379 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
scripts/common.sh@345 -- # : 1 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:22.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.379 --rc genhtml_branch_coverage=1 00:19:22.379 --rc genhtml_function_coverage=1 00:19:22.379 --rc genhtml_legend=1 00:19:22.379 --rc geninfo_all_blocks=1 00:19:22.379 --rc geninfo_unexecuted_blocks=1 00:19:22.379 00:19:22.379 ' 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:22.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.379 --rc genhtml_branch_coverage=1 00:19:22.379 --rc genhtml_function_coverage=1 00:19:22.379 --rc genhtml_legend=1 00:19:22.379 --rc geninfo_all_blocks=1 00:19:22.379 --rc geninfo_unexecuted_blocks=1 00:19:22.379 00:19:22.379 ' 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:22.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.379 --rc genhtml_branch_coverage=1 00:19:22.379 --rc genhtml_function_coverage=1 00:19:22.379 --rc genhtml_legend=1 00:19:22.379 --rc geninfo_all_blocks=1 00:19:22.379 --rc geninfo_unexecuted_blocks=1 00:19:22.379 00:19:22.379 ' 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:22.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.379 --rc genhtml_branch_coverage=1 00:19:22.379 --rc genhtml_function_coverage=1 00:19:22.379 --rc genhtml_legend=1 00:19:22.379 --rc geninfo_all_blocks=1 00:19:22.379 --rc geninfo_unexecuted_blocks=1 00:19:22.379 00:19:22.379 ' 
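The nvme_cli test opens the same way every test does, probing the installed lcov against 1.15 with the cmp_versions helper: both version strings are split on their separators and compared field by field, with missing fields treated as zero. A functionally similar sketch (the traced helper also splits on - and :):

    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "pre-2.0 lcov"   # 1 < 2 in the first field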
00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.379 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
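common.sh derives the host identity from nvme gen-hostnqn, which emits nqn.2014-08.org.nvmexpress:uuid:<uuid>; the host ID placed into NVME_HOST as --hostid is the trailing uuid. One way to split it (an assumption: the real helper may extract it differently):

    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}   # keep everything after the last ':'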
00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:22.380 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:22.380 02:00:42 
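Note the error the log itself captures from nvmf/common.sh line 33: '[' '' -eq 1 ']'. test's -eq demands an integer on both sides, and an empty variable yields "integer expression expected". The variable being tested is not visible in the trace, so with a hypothetical stand-in, the failure and a defensive fix look like:

    flag=""                     # hypothetical stand-in for the unset variable
    [ "$flag" -eq 1 ]           # -> [: : integer expression expected
    [ "${flag:-0}" -eq 1 ]      # default the empty value to 0 before the numeric test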
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:22.380 02:00:42 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:19:28.955 Found 0000:18:00.0 (0x8086 - 0x159b) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 
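gather_supported_nvmf_pci_devs buckets ports by PCI vendor:device ID; 0x8086:0x159b (both ports on this box) lands in the e810 array, which is what later selects the ice/irdma handling. A minimal re-creation of the classification visible in the trace, wrapped in a hypothetical helper:

    e810=() x722=() mlx=()
    intel=0x8086 mellanox=0x15b3
    classify() {               # args: vendor device pci-address
        case "$1:$2" in
            $intel:0x1592|$intel:0x159b) e810+=("$3") ;;
            $intel:0x37d2)               x722+=("$3") ;;
            $mellanox:*)                 mlx+=("$3") ;;
        esac
    }
    classify 0x8086 0x159b 0000:18:00.0   # -> e810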
00:19:28.955 Found 0000:18:00.1 (0x8086 - 0x159b) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@403 -- # modinfo irdma 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:19:28.955 Found net devices under 0000:18:00.0: cvl_0_0 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:19:28.955 Found net devices under 0000:18:00.1: cvl_0_1 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:28.955 02:00:48 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # rdma_device_init 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@528 -- # allocate_nic_ips 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:28.955 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo cvl_0_0 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:28.956 02:00:48 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo cvl_0_1 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:19:28.956 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:19:28.956 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:19:28.956 altname enp24s0f0np0 00:19:28.956 altname ens785f0np0 00:19:28.956 inet 192.168.100.8/24 scope global cvl_0_0 00:19:28.956 valid_lft forever preferred_lft forever 00:19:28.956 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:19:28.956 valid_lft forever preferred_lft forever 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:19:28.956 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:19:28.956 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:19:28.956 altname enp24s0f1np1 00:19:28.956 altname ens785f1np1 00:19:28.956 inet 192.168.100.9/24 scope global cvl_0_1 00:19:28.956 valid_lft forever preferred_lft forever 00:19:28.956 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:19:28.956 valid_lft forever preferred_lft forever 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 
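allocate_nic_ips, traced above, resolves each RDMA interface to its IPv4 address with a small pipeline: 'ip -o' prints one record per line, awk picks field 4 (which holds ADDR/PREFIX), and cut strips the prefix length, yielding 192.168.100.8 for cvl_0_0 and 192.168.100.9 for cvl_0_1. The same helper, re-declared standalone for illustration:

  # Same pipeline as the trace, wrapped for reuse.
  get_ip_address() {
    local interface=$1
    # -o gives one line per record; field 4 is e.g. "192.168.100.8/24"
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  get_ip_address cvl_0_0   # -> 192.168.100.8 on the machine in this log
  get_ip_address cvl_0_1   # -> 192.168.100.9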
00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo cvl_0_0 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo cvl_0_1 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:19:28.956 02:00:48 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:19:28.956 192.168.100.9' 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:19:28.956 192.168.100.9' 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # head -n 1 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:19:28.956 192.168.100.9' 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # tail -n +2 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # head -n 1 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:28.956 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:29.216 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:29.216 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:29.216 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=3260537 00:19:29.216 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:29.216 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 3260537 00:19:29.216 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 3260537 ']' 00:19:29.216 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.216 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:29.216 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
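With both target IPs derived (head -n 1 and tail -n +2 over the newline-joined RDMA_IP_LIST give 192.168.100.8 and 192.168.100.9), the host loads nvme-rdma and nvmfappstart launches the target binary, whose startup is traced next. Condensed, that bring-up plus the transport creation that follows looks like the sketch below; $SPDK stands for the workspace tree shown in the log, and the poll loop is a stand-in for the harness's waitforlisten:

  SPDK=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk   # as in this log

  # Start the target: shm id 0, all tracepoint groups, 4-core mask.
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Stand-in for waitforlisten: poll until the RPC socket answers.
  until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
          >/dev/null 2>&1; do
    sleep 0.5
  done

  # RDMA transport with the options the trace passes through rpc_cmd.
  $SPDK/scripts/rpc.py nvmf_create_transport -t rdma \
    --num-shared-buffers 1024 -u 8192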
00:19:29.216 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:29.216 02:00:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:29.216 [2024-10-09 02:00:48.870961] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:19:29.216 [2024-10-09 02:00:48.871068] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.216 [2024-10-09 02:00:49.001723] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:29.476 [2024-10-09 02:00:49.198523] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.476 [2024-10-09 02:00:49.198588] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.476 [2024-10-09 02:00:49.198618] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.476 [2024-10-09 02:00:49.198632] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.476 [2024-10-09 02:00:49.198642] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:29.476 [2024-10-09 02:00:49.201074] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.476 [2024-10-09 02:00:49.201144] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.476 [2024-10-09 02:00:49.201235] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.476 [2024-10-09 02:00:49.201242] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:19:30.044 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:30.044 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:19:30.044 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:30.044 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:30.044 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:30.044 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.044 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:30.044 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.044 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:30.044 [2024-10-09 02:00:49.754145] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x6120000292c0/0x617000007c40) succeed. 00:19:30.044 [2024-10-09 02:00:49.764012] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029440/0x617000007fc0) succeed. 00:19:30.044 [2024-10-09 02:00:49.764050] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:19:30.044 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.044 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:30.044 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.044 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:30.044 Malloc0 00:19:30.044 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.044 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:30.044 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.044 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:30.303 Malloc1 00:19:30.303 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.303 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:30.303 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.303 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:30.303 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.303 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:30.303 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.303 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:30.303 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.303 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:30.303 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.303 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:30.303 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.303 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:30.303 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.303 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:30.303 [2024-10-09 02:00:49.960244] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:30.303 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.304 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:30.304 02:00:49 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.304 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:30.304 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.304 02:00:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -a 192.168.100.8 -s 4420 00:19:30.304 00:19:30.304 Discovery Log Number of Records 2, Generation counter 2 00:19:30.304 =====Discovery Log Entry 0====== 00:19:30.304 trtype: rdma 00:19:30.304 adrfam: ipv4 00:19:30.304 subtype: current discovery subsystem 00:19:30.304 treq: not required 00:19:30.304 portid: 0 00:19:30.304 trsvcid: 4420 00:19:30.304 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:30.304 traddr: 192.168.100.8 00:19:30.304 eflags: explicit discovery connections, duplicate discovery information 00:19:30.304 rdma_prtype: not specified 00:19:30.304 rdma_qptype: connected 00:19:30.304 rdma_cms: rdma-cm 00:19:30.304 rdma_pkey: 0x0000 00:19:30.304 =====Discovery Log Entry 1====== 00:19:30.304 trtype: rdma 00:19:30.304 adrfam: ipv4 00:19:30.304 subtype: nvme subsystem 00:19:30.304 treq: not required 00:19:30.304 portid: 0 00:19:30.304 trsvcid: 4420 00:19:30.304 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:30.304 traddr: 192.168.100.8 00:19:30.304 eflags: none 00:19:30.304 rdma_prtype: not specified 00:19:30.304 rdma_qptype: connected 00:19:30.304 rdma_cms: rdma-cm 00:19:30.304 rdma_pkey: 0x0000 00:19:30.304 02:00:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:30.304 02:00:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:30.304 02:00:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:19:30.304 02:00:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:30.304 02:00:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:19:30.304 02:00:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:19:30.304 02:00:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:30.304 02:00:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:19:30.304 02:00:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:30.304 02:00:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:30.304 02:00:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:30.563 02:00:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:30.563 02:00:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:19:30.563 02:00:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:30.563 02:00:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:19:30.563 02:00:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:19:30.563 02:00:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:19:33.098 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:33.098 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:33.098 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:33.098 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:19:33.098 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:33.098 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:19:33.098 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:33.098 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:19:33.098 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:33.098 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:19:33.098 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:19:33.098 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:33.098 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:19:33.098 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:33.098 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:33.098 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:19:33.098 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:33.099 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:33.099 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:19:33.099 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:33.099 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:33.099 /dev/nvme0n2 ]] 00:19:33.099 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:33.099 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:33.099 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:19:33.099 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:33.099 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:19:33.099 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:19:33.099 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:33.099 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:19:33.099 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:33.099 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:33.099 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:19:33.099 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:33.099 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:33.099 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:19:33.099 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:33.099 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:33.099 02:00:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:33.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:33.667 
02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:33.667 rmmod nvme_rdma 00:19:33.667 rmmod nvme_fabrics 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 3260537 ']' 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 3260537 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 3260537 ']' 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 3260537 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3260537 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3260537' 00:19:33.667 killing process with pid 3260537 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 3260537 00:19:33.667 02:00:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 3260537 00:19:35.573 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:35.573 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:19:35.573 00:19:35.573 real 0m13.204s 00:19:35.573 user 0m24.347s 00:19:35.573 sys 0m5.961s 00:19:35.573 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:35.573 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:35.573 ************************************ 00:19:35.574 END TEST nvmf_nvme_cli 00:19:35.574 ************************************ 00:19:35.574 02:00:55 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:19:35.574 02:00:55 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:19:35.574 02:00:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:35.574 02:00:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:35.574 02:00:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:35.574 ************************************ 00:19:35.574 START TEST nvmf_auth_target 00:19:35.574 ************************************ 00:19:35.574 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:19:35.574 * Looking for test storage... 00:19:35.574 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:19:35.574 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:35.574 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:19:35.574 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:35.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.833 --rc genhtml_branch_coverage=1 00:19:35.833 --rc genhtml_function_coverage=1 00:19:35.833 --rc genhtml_legend=1 00:19:35.833 --rc geninfo_all_blocks=1 00:19:35.833 --rc geninfo_unexecuted_blocks=1 00:19:35.833 00:19:35.833 ' 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:35.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.833 --rc genhtml_branch_coverage=1 00:19:35.833 --rc genhtml_function_coverage=1 00:19:35.833 --rc genhtml_legend=1 00:19:35.833 --rc geninfo_all_blocks=1 00:19:35.833 --rc geninfo_unexecuted_blocks=1 00:19:35.833 00:19:35.833 ' 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:35.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.833 --rc genhtml_branch_coverage=1 00:19:35.833 --rc genhtml_function_coverage=1 00:19:35.833 --rc genhtml_legend=1 00:19:35.833 --rc geninfo_all_blocks=1 00:19:35.833 --rc geninfo_unexecuted_blocks=1 00:19:35.833 00:19:35.833 ' 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:35.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.833 --rc genhtml_branch_coverage=1 00:19:35.833 --rc genhtml_function_coverage=1 00:19:35.833 --rc genhtml_legend=1 00:19:35.833 --rc geninfo_all_blocks=1 00:19:35.833 --rc geninfo_unexecuted_blocks=1 00:19:35.833 00:19:35.833 ' 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.833 02:00:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.833 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:35.834 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:35.834 02:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.397 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:42.397 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:42.397 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:42.397 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:42.397 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:42.397 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:42.397 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:42.397 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:42.397 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:42.397 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:42.397 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:42.397 02:01:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:42.397 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:42.397 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:42.397 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:42.397 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:42.397 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:42.397 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:42.397 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:19:42.398 Found 0000:18:00.0 (0x8086 - 0x159b) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:42.398 02:01:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:19:42.398 Found 0000:18:00.1 (0x8086 - 0x159b) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@403 -- # modinfo irdma 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:19:42.398 Found net devices under 0000:18:00.0: cvl_0_0 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.398 02:01:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:19:42.398 Found net devices under 0000:18:00.1: cvl_0_1 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # rdma_device_init 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # allocate_nic_ips 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:42.398 
02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo cvl_0_0 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo cvl_0_1 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:19:42.398 28: cvl_0_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:19:42.398 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:19:42.398 altname enp24s0f0np0 00:19:42.398 altname ens785f0np0 00:19:42.398 inet 192.168.100.8/24 scope global cvl_0_0 00:19:42.398 valid_lft forever preferred_lft forever 00:19:42.398 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:19:42.398 valid_lft forever preferred_lft forever 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:42.398 02:01:01
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:42.398 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:19:42.399 29: cvl_0_1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:19:42.399 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:19:42.399 altname enp24s0f1np1 00:19:42.399 altname ens785f1np1 00:19:42.399 inet 192.168.100.9/24 scope global cvl_0_1 00:19:42.399 valid_lft forever preferred_lft forever 00:19:42.399 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:19:42.399 valid_lft forever preferred_lft forever 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo cvl_0_0 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo cvl_0_1 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2
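
Aside: the get_rdma_if_list/get_ip_address helpers traced above intersect the kernel's net devices with the RDMA-capable devices reported by rxe_cfg, then pull the first IPv4 address off each match with the ip/awk/cut pipeline visible in the trace. Below is a minimal stand-alone re-creation of that address-harvesting step, reconstructed from the trace rather than copied from SPDK's nvmf/common.sh; the cvl_0_0/cvl_0_1 names are what this particular run discovered, not a fixed convention.

get_ip_address() {
    local interface=$1
    # "ip -o" prints one record per line; field 4 holds ADDR/PREFIX
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
for nic in cvl_0_0 cvl_0_1; do
    get_ip_address "$nic"   # printed 192.168.100.8 and 192.168.100.9 on this run
done
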
00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:19:42.399 192.168.100.9' 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:19:42.399 192.168.100.9' 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # head -n 1 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:19:42.399 192.168.100.9' 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # tail -n +2 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # head -n 1 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@507 -- # nvmfpid=3264220 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 3264220 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3264220 ']' 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:42.399 02:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3264396 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=372fb729922ba31121ea6205c1eb09196c1d62e00cfab671 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.UdU 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 372fb729922ba31121ea6205c1eb09196c1d62e00cfab671 0 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 372fb729922ba31121ea6205c1eb09196c1d62e00cfab671 0 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=372fb729922ba31121ea6205c1eb09196c1d62e00cfab671 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.UdU 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.UdU 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.UdU 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=a33070e1ea50ea2b7c8fa7960b6d0281d09d8ef3974de45aa2b82dff8150576f 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.Gp9 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key a33070e1ea50ea2b7c8fa7960b6d0281d09d8ef3974de45aa2b82dff8150576f 3 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 a33070e1ea50ea2b7c8fa7960b6d0281d09d8ef3974de45aa2b82dff8150576f 3 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=a33070e1ea50ea2b7c8fa7960b6d0281d09d8ef3974de45aa2b82dff8150576f 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:19:42.659 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:42.919 
02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.Gp9 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.Gp9 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Gp9 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=7ef4b383a0e6a74de89f64bb2e1ab6bf 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.A9t 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 7ef4b383a0e6a74de89f64bb2e1ab6bf 1 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 7ef4b383a0e6a74de89f64bb2e1ab6bf 1 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=7ef4b383a0e6a74de89f64bb2e1ab6bf 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.A9t 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.A9t 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.A9t 00:19:42.919 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 
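
Aside: every gen_dhchap_key call in this block follows the same recipe: read N random bytes from /dev/urandom as an ASCII hex string (xxd -p -c0 -l N), then wrap that string into an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<digest-id>:<base64-payload>:, where the digest id is 00/01/02/03 for null/sha256/sha384/sha512. Judging from the secrets printed later in this log (DHHC-1:00:MzcyZmI3... decodes back to the hex string plus four trailing bytes), the payload is the hex key followed by its CRC32. The sketch below is a hedged condensation of that recipe, not SPDK's verbatim gen_dhchap_key/format_dhchap_key helpers; the CRC32 suffix in particular is inferred from the secrets visible in this log.

gen_dhchap_key() {
    # args mirror the trace: digest name and key length in hex characters
    local digest=$1 len=$2 key id
    local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    id=${ids[$digest]}
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars = len/2 bytes
    # base64(key + CRC32(key)) wrapped as DHHC-1:<digest>:<payload>:
    python3 -c 'import base64,sys,zlib; k=sys.argv[2].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[1]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$id" "$key"
}
gen_dhchap_key null 48     # -> DHHC-1:00:<base64>:  (cf. keys[0] above)
gen_dhchap_key sha512 64   # -> DHHC-1:03:<base64>:  (cf. ckeys[0] above)
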
00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=0964d452e438e164641bae544a5f349fda05173a1846b5a9 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.6ne 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 0964d452e438e164641bae544a5f349fda05173a1846b5a9 2 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 0964d452e438e164641bae544a5f349fda05173a1846b5a9 2 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=0964d452e438e164641bae544a5f349fda05173a1846b5a9 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.6ne 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.6ne 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.6ne 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=a6ba7e8b3090ed0f89a49051dd960f7fb2031919948a71e9 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Nfn 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key a6ba7e8b3090ed0f89a49051dd960f7fb2031919948a71e9 2 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 a6ba7e8b3090ed0f89a49051dd960f7fb2031919948a71e9 2 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=a6ba7e8b3090ed0f89a49051dd960f7fb2031919948a71e9 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Nfn 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Nfn 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Nfn 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=cf59ad9ae42c9e1f506bb3baca7ece81 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.GfP 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key cf59ad9ae42c9e1f506bb3baca7ece81 1 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 cf59ad9ae42c9e1f506bb3baca7ece81 1 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=cf59ad9ae42c9e1f506bb3baca7ece81 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:19:42.920 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:43.179 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.GfP 00:19:43.179 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.GfP 00:19:43.179 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.GfP 00:19:43.179 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:43.179 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:43.179 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:43.179 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:43.179 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=6b829c7fc47a06b02338be37351c62a7b6d8d9c3bc02e641d4694e5e0a5b4bd5 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.YfU 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 6b829c7fc47a06b02338be37351c62a7b6d8d9c3bc02e641d4694e5e0a5b4bd5 3 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 6b829c7fc47a06b02338be37351c62a7b6d8d9c3bc02e641d4694e5e0a5b4bd5 3 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=6b829c7fc47a06b02338be37351c62a7b6d8d9c3bc02e641d4694e5e0a5b4bd5 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.YfU 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.YfU 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.YfU 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3264220 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3264220 ']' 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
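
Aside: the test now has two SPDK processes to wait on, the nvmf target (pid 3264220) on the default RPC socket /var/tmp/spdk.sock and the host-side spdk_tgt (pid 3264396) started with -r /var/tmp/host.sock, and it registers each generated key file on both sides: rpc_cmd talks to the target socket while the hostrpc wrapper points the same rpc.py at the host socket. Condensed, the registration for key pair 0 looks like this (rpc.py paths abbreviated; the trace spells out the full /var/jenkins/workspace/... prefix):

# target side: default socket /var/tmp/spdk.sock
scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.UdU
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Gp9
# host side: the spdk_tgt launched with -r /var/tmp/host.sock
scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.UdU
scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Gp9
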
00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:43.180 02:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.439 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:43.439 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:43.440 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3264396 /var/tmp/host.sock 00:19:43.440 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3264396 ']' 00:19:43.440 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:19:43.440 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:43.440 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:43.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:43.440 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:43.440 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.007 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:44.007 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:44.007 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:44.007 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.007 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.007 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.008 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:44.008 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.UdU 00:19:44.008 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.008 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.008 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.008 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.UdU 00:19:44.008 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.UdU 00:19:44.266 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Gp9 ]] 00:19:44.266 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Gp9 00:19:44.266 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.266 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.266 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.266 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Gp9 00:19:44.267 02:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Gp9 00:19:44.267 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:44.267 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.A9t 00:19:44.267 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.267 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.267 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.267 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.A9t 00:19:44.267 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.A9t 00:19:44.525 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.6ne ]] 00:19:44.525 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6ne 00:19:44.525 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.525 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.526 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.526 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6ne 00:19:44.526 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6ne 00:19:44.784 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:44.784 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Nfn 00:19:44.784 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.784 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.784 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.784 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Nfn 00:19:44.784 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Nfn 00:19:45.043 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.GfP ]] 00:19:45.043 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GfP 00:19:45.043 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.043 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.043 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.044 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GfP 00:19:45.044 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GfP 00:19:45.303 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:45.303 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.YfU 00:19:45.303 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.303 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.303 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.303 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.YfU 00:19:45.303 02:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.YfU 00:19:45.303 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:45.303 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:45.303 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:45.303 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.303 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:45.303 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:45.562 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:45.562 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.562 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:45.562 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:45.562 02:01:05 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:45.562 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.562 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.562 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.562 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.562 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.562 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.562 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.562 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.822 00:19:45.822 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.822 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.822 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.081 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.081 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.081 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.081 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.081 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.081 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.081 { 00:19:46.081 "cntlid": 1, 00:19:46.081 "qid": 0, 00:19:46.081 "state": "enabled", 00:19:46.081 "thread": "nvmf_tgt_poll_group_000", 00:19:46.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:19:46.081 "listen_address": { 00:19:46.081 "trtype": "RDMA", 00:19:46.081 "adrfam": "IPv4", 00:19:46.081 "traddr": "192.168.100.8", 00:19:46.081 "trsvcid": "4420" 00:19:46.081 }, 00:19:46.081 "peer_address": { 00:19:46.081 "trtype": "RDMA", 00:19:46.081 "adrfam": "IPv4", 00:19:46.081 "traddr": "192.168.100.8", 00:19:46.081 "trsvcid": "57805" 00:19:46.081 }, 00:19:46.081 "auth": { 00:19:46.081 "state": 
"completed", 00:19:46.081 "digest": "sha256", 00:19:46.081 "dhgroup": "null" 00:19:46.081 } 00:19:46.081 } 00:19:46.081 ]' 00:19:46.081 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.081 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.081 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.340 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:46.340 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.340 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.340 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.340 02:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.598 02:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:19:46.598 02:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:19:47.167 02:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.167 02:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:19:47.167 02:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.167 02:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.167 02:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.167 02:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.167 02:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:47.167 02:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:47.426 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:47.426 02:01:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.426 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:47.426 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:47.427 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:47.427 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.427 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.427 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.427 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.427 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.427 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.427 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.427 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.686 00:19:47.686 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.686 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.686 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.945 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.945 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.945 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.945 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.945 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.945 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.945 { 00:19:47.945 "cntlid": 3, 00:19:47.945 "qid": 0, 00:19:47.945 "state": "enabled", 00:19:47.945 "thread": "nvmf_tgt_poll_group_000", 00:19:47.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:19:47.945 "listen_address": { 00:19:47.945 "trtype": "RDMA", 
00:19:47.945 "adrfam": "IPv4", 00:19:47.945 "traddr": "192.168.100.8", 00:19:47.945 "trsvcid": "4420" 00:19:47.945 }, 00:19:47.945 "peer_address": { 00:19:47.945 "trtype": "RDMA", 00:19:47.945 "adrfam": "IPv4", 00:19:47.945 "traddr": "192.168.100.8", 00:19:47.945 "trsvcid": "33082" 00:19:47.945 }, 00:19:47.945 "auth": { 00:19:47.945 "state": "completed", 00:19:47.945 "digest": "sha256", 00:19:47.945 "dhgroup": "null" 00:19:47.945 } 00:19:47.945 } 00:19:47.945 ]' 00:19:47.945 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.945 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.945 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.945 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:47.945 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.945 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.945 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.945 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.204 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:19:48.204 02:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:19:48.773 02:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.032 02:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:19:49.032 02:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.032 02:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.032 02:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.032 02:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.032 02:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:49.032 02:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:49.032 02:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:49.032 02:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.032 02:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:49.032 02:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:49.032 02:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:49.032 02:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.032 02:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.032 02:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.032 02:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.032 02:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.032 02:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.032 02:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.032 02:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.291 00:19:49.550 02:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.550 02:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.550 02:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.550 02:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.550 02:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.550 02:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.550 02:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.550 02:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.550 02:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.550 { 00:19:49.550 "cntlid": 5, 00:19:49.550 "qid": 0, 00:19:49.550 "state": 
"enabled", 00:19:49.550 "thread": "nvmf_tgt_poll_group_000", 00:19:49.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:19:49.550 "listen_address": { 00:19:49.550 "trtype": "RDMA", 00:19:49.550 "adrfam": "IPv4", 00:19:49.550 "traddr": "192.168.100.8", 00:19:49.550 "trsvcid": "4420" 00:19:49.550 }, 00:19:49.550 "peer_address": { 00:19:49.550 "trtype": "RDMA", 00:19:49.550 "adrfam": "IPv4", 00:19:49.550 "traddr": "192.168.100.8", 00:19:49.550 "trsvcid": "59984" 00:19:49.550 }, 00:19:49.550 "auth": { 00:19:49.550 "state": "completed", 00:19:49.550 "digest": "sha256", 00:19:49.550 "dhgroup": "null" 00:19:49.550 } 00:19:49.550 } 00:19:49.550 ]' 00:19:49.550 02:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.550 02:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.550 02:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.809 02:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:49.809 02:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.809 02:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.809 02:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.809 02:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.068 02:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:19:50.068 02:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:19:50.637 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.637 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:19:50.637 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.637 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.637 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.637 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.637 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups null 00:19:50.637 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:50.896 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:50.896 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.896 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:50.896 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:50.896 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:50.896 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.896 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key3 00:19:50.896 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.896 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.896 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.896 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:50.896 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.896 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:51.155 00:19:51.155 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.155 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.155 02:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.417 02:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.417 02:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.417 02:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.417 02:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.417 02:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.417 02:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@74 -- # qpairs='[ 00:19:51.417 { 00:19:51.417 "cntlid": 7, 00:19:51.417 "qid": 0, 00:19:51.417 "state": "enabled", 00:19:51.417 "thread": "nvmf_tgt_poll_group_000", 00:19:51.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:19:51.417 "listen_address": { 00:19:51.417 "trtype": "RDMA", 00:19:51.417 "adrfam": "IPv4", 00:19:51.417 "traddr": "192.168.100.8", 00:19:51.417 "trsvcid": "4420" 00:19:51.417 }, 00:19:51.417 "peer_address": { 00:19:51.417 "trtype": "RDMA", 00:19:51.417 "adrfam": "IPv4", 00:19:51.417 "traddr": "192.168.100.8", 00:19:51.417 "trsvcid": "42284" 00:19:51.417 }, 00:19:51.417 "auth": { 00:19:51.417 "state": "completed", 00:19:51.417 "digest": "sha256", 00:19:51.417 "dhgroup": "null" 00:19:51.417 } 00:19:51.417 } 00:19:51.417 ]' 00:19:51.417 02:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.417 02:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.417 02:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.417 02:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:51.417 02:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.417 02:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.417 02:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.417 02:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.704 02:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:19:51.704 02:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:19:52.375 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.375 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:19:52.375 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.375 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.375 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.375 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.375 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:52.375 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.375 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.634 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:52.634 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.634 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:52.634 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:52.634 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:52.634 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.634 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.634 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.634 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.634 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.634 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.635 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.635 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.894 00:19:52.894 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.894 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.894 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.153 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.153 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.153 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.153 02:01:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.153 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.153 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.153 { 00:19:53.153 "cntlid": 9, 00:19:53.153 "qid": 0, 00:19:53.153 "state": "enabled", 00:19:53.153 "thread": "nvmf_tgt_poll_group_000", 00:19:53.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:19:53.153 "listen_address": { 00:19:53.153 "trtype": "RDMA", 00:19:53.153 "adrfam": "IPv4", 00:19:53.153 "traddr": "192.168.100.8", 00:19:53.153 "trsvcid": "4420" 00:19:53.153 }, 00:19:53.153 "peer_address": { 00:19:53.153 "trtype": "RDMA", 00:19:53.153 "adrfam": "IPv4", 00:19:53.153 "traddr": "192.168.100.8", 00:19:53.153 "trsvcid": "42679" 00:19:53.153 }, 00:19:53.153 "auth": { 00:19:53.153 "state": "completed", 00:19:53.153 "digest": "sha256", 00:19:53.153 "dhgroup": "ffdhe2048" 00:19:53.153 } 00:19:53.153 } 00:19:53.153 ]' 00:19:53.153 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.153 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.153 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.153 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:53.153 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.412 02:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.412 02:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.412 02:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.412 02:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:19:53.412 02:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:19:54.349 02:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.349 02:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:19:54.350 02:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.350 02:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.350 02:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.350 02:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.350 02:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:54.350 02:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:54.609 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:54.609 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.609 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:54.609 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:54.609 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:54.609 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.609 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.609 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.609 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.609 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.609 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.609 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.609 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.868 00:19:54.868 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.868 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.868 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.868 02:01:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.868 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.868 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.868 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.868 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.868 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.868 { 00:19:54.868 "cntlid": 11, 00:19:54.869 "qid": 0, 00:19:54.869 "state": "enabled", 00:19:54.869 "thread": "nvmf_tgt_poll_group_000", 00:19:54.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:19:54.869 "listen_address": { 00:19:54.869 "trtype": "RDMA", 00:19:54.869 "adrfam": "IPv4", 00:19:54.869 "traddr": "192.168.100.8", 00:19:54.869 "trsvcid": "4420" 00:19:54.869 }, 00:19:54.869 "peer_address": { 00:19:54.869 "trtype": "RDMA", 00:19:54.869 "adrfam": "IPv4", 00:19:54.869 "traddr": "192.168.100.8", 00:19:54.869 "trsvcid": "40724" 00:19:54.869 }, 00:19:54.869 "auth": { 00:19:54.869 "state": "completed", 00:19:54.869 "digest": "sha256", 00:19:54.869 "dhgroup": "ffdhe2048" 00:19:54.869 } 00:19:54.869 } 00:19:54.869 ]' 00:19:54.869 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.128 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.128 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.128 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:55.128 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.128 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.128 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.128 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.387 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:19:55.387 02:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:19:55.955 02:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.955 
02:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:19:55.955 02:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.955 02:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.955 02:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.955 02:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.955 02:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.955 02:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:56.213 02:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:56.213 02:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.213 02:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:56.213 02:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:56.213 02:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:56.213 02:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.213 02:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.213 02:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.213 02:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.213 02:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.213 02:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.213 02:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.213 02:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.472 00:19:56.472 02:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.472 02:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.472 02:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.732 02:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.732 02:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.732 02:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.732 02:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.732 02:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.732 02:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.732 { 00:19:56.732 "cntlid": 13, 00:19:56.732 "qid": 0, 00:19:56.732 "state": "enabled", 00:19:56.732 "thread": "nvmf_tgt_poll_group_000", 00:19:56.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:19:56.732 "listen_address": { 00:19:56.732 "trtype": "RDMA", 00:19:56.732 "adrfam": "IPv4", 00:19:56.732 "traddr": "192.168.100.8", 00:19:56.732 "trsvcid": "4420" 00:19:56.732 }, 00:19:56.732 "peer_address": { 00:19:56.732 "trtype": "RDMA", 00:19:56.732 "adrfam": "IPv4", 00:19:56.732 "traddr": "192.168.100.8", 00:19:56.732 "trsvcid": "43316" 00:19:56.732 }, 00:19:56.732 "auth": { 00:19:56.732 "state": "completed", 00:19:56.732 "digest": "sha256", 00:19:56.732 "dhgroup": "ffdhe2048" 00:19:56.732 } 00:19:56.732 } 00:19:56.732 ]' 00:19:56.732 02:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.732 02:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.732 02:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.732 02:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:56.732 02:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.732 02:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.732 02:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.732 02:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.991 02:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:19:56.991 02:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret 
DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:19:57.559 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.819 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:19:57.819 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.819 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.819 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.819 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.819 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.819 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:58.078 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:58.078 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.078 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.078 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:58.078 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:58.078 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.078 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key3 00:19:58.078 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.078 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.078 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.078 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:58.078 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.078 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.337 00:19:58.337 02:01:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.337 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.337 02:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.337 02:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.337 02:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.338 02:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.338 02:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.338 02:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.338 02:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.338 { 00:19:58.338 "cntlid": 15, 00:19:58.338 "qid": 0, 00:19:58.338 "state": "enabled", 00:19:58.338 "thread": "nvmf_tgt_poll_group_000", 00:19:58.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:19:58.338 "listen_address": { 00:19:58.338 "trtype": "RDMA", 00:19:58.338 "adrfam": "IPv4", 00:19:58.338 "traddr": "192.168.100.8", 00:19:58.338 "trsvcid": "4420" 00:19:58.338 }, 00:19:58.338 "peer_address": { 00:19:58.338 "trtype": "RDMA", 00:19:58.338 "adrfam": "IPv4", 00:19:58.338 "traddr": "192.168.100.8", 00:19:58.338 "trsvcid": "40814" 00:19:58.338 }, 00:19:58.338 "auth": { 00:19:58.338 "state": "completed", 00:19:58.338 "digest": "sha256", 00:19:58.338 "dhgroup": "ffdhe2048" 00:19:58.338 } 00:19:58.338 } 00:19:58.338 ]' 00:19:58.596 02:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.596 02:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.596 02:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.596 02:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:58.596 02:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.596 02:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.596 02:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.596 02:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.855 02:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:19:58.855 02:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret 
DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:19:59.424 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.424 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:19:59.424 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.424 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.424 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.424 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.424 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.424 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:59.424 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:59.683 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:59.683 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.683 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:59.683 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:59.683 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:59.683 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.683 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.683 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.683 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.683 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.683 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.683 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.683 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.942 00:19:59.942 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.942 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.942 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.201 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.201 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.201 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.201 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.201 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.201 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.201 { 00:20:00.201 "cntlid": 17, 00:20:00.201 "qid": 0, 00:20:00.201 "state": "enabled", 00:20:00.201 "thread": "nvmf_tgt_poll_group_000", 00:20:00.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:00.201 "listen_address": { 00:20:00.201 "trtype": "RDMA", 00:20:00.201 "adrfam": "IPv4", 00:20:00.201 "traddr": "192.168.100.8", 00:20:00.201 "trsvcid": "4420" 00:20:00.201 }, 00:20:00.201 "peer_address": { 00:20:00.201 "trtype": "RDMA", 00:20:00.201 "adrfam": "IPv4", 00:20:00.201 "traddr": "192.168.100.8", 00:20:00.201 "trsvcid": "43223" 00:20:00.201 }, 00:20:00.201 "auth": { 00:20:00.201 "state": "completed", 00:20:00.201 "digest": "sha256", 00:20:00.201 "dhgroup": "ffdhe3072" 00:20:00.201 } 00:20:00.201 } 00:20:00.201 ]' 00:20:00.201 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.201 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.201 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.201 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:00.201 02:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.460 02:01:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.460 02:01:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.460 02:01:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.460 02:01:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret 
DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:20:00.460 02:01:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:20:01.394 02:01:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.394 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:01.394 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.394 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.394 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.394 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.394 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:01.394 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:01.652 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:01.652 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.652 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:01.652 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:01.652 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:01.652 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.652 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.652 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.652 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.652 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.652 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.652 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.652 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.910 00:20:01.910 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.910 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.910 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.911 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.911 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.911 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.911 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.169 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.169 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.169 { 00:20:02.169 "cntlid": 19, 00:20:02.169 "qid": 0, 00:20:02.169 "state": "enabled", 00:20:02.169 "thread": "nvmf_tgt_poll_group_000", 00:20:02.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:02.169 "listen_address": { 00:20:02.169 "trtype": "RDMA", 00:20:02.169 "adrfam": "IPv4", 00:20:02.169 "traddr": "192.168.100.8", 00:20:02.169 "trsvcid": "4420" 00:20:02.169 }, 00:20:02.169 "peer_address": { 00:20:02.169 "trtype": "RDMA", 00:20:02.169 "adrfam": "IPv4", 00:20:02.169 "traddr": "192.168.100.8", 00:20:02.169 "trsvcid": "37081" 00:20:02.169 }, 00:20:02.169 "auth": { 00:20:02.169 "state": "completed", 00:20:02.169 "digest": "sha256", 00:20:02.169 "dhgroup": "ffdhe3072" 00:20:02.169 } 00:20:02.169 } 00:20:02.169 ]' 00:20:02.169 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.169 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.169 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.169 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:02.169 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.169 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.169 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.169 02:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.428 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:20:02.428 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:20:02.994 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.994 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:02.994 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.994 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.994 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.994 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.994 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:02.994 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:03.252 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:03.252 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.252 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:03.252 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:03.252 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:03.252 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.252 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.252 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.252 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.252 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
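Each connect_authenticate pass in this trace boils down to three RPCs: constrain the host bdev layer to one digest/dhgroup pair, register the host NQN on the subsystem with a key pair, then attach a controller with the matching keys. A minimal standalone sketch of that sequence, assuming the target and host daemons from this run are still up, that key2/ckey2 were loaded into both keyrings earlier in the run, and that the target-side socket is rpc.py's default (the rpc_cmd wrapper in this log passes no -s):

    #!/usr/bin/env bash
    set -e
    rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Host daemon (what hostrpc wraps): allow exactly one digest/dhgroup combo.
    $rpc_py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

    # Target daemon (what rpc_cmd wraps; default socket assumed): admit the
    # host with a host key and a controller (bidirectional) key.
    $rpc_py nvmf_subsystem_add_host $subnqn $hostnqn \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host daemon again: attach over RDMA with the matching key pair; the
    # controller only comes up if DH-HMAC-CHAP completes.
    $rpc_py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q $hostnqn -n $subnqn -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2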
00:20:03.252 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.252 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.252 02:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.510 00:20:03.510 02:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.510 02:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.510 02:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.770 02:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.770 02:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.770 02:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.770 02:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.770 02:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.770 02:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.770 { 00:20:03.770 "cntlid": 21, 00:20:03.770 "qid": 0, 00:20:03.770 "state": "enabled", 00:20:03.770 "thread": "nvmf_tgt_poll_group_000", 00:20:03.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:03.770 "listen_address": { 00:20:03.770 "trtype": "RDMA", 00:20:03.770 "adrfam": "IPv4", 00:20:03.770 "traddr": "192.168.100.8", 00:20:03.770 "trsvcid": "4420" 00:20:03.770 }, 00:20:03.770 "peer_address": { 00:20:03.770 "trtype": "RDMA", 00:20:03.770 "adrfam": "IPv4", 00:20:03.770 "traddr": "192.168.100.8", 00:20:03.770 "trsvcid": "53620" 00:20:03.770 }, 00:20:03.770 "auth": { 00:20:03.770 "state": "completed", 00:20:03.770 "digest": "sha256", 00:20:03.770 "dhgroup": "ffdhe3072" 00:20:03.770 } 00:20:03.770 } 00:20:03.770 ]' 00:20:03.770 02:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.770 02:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.770 02:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.770 02:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:03.770 02:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.770 02:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.770 02:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.770 02:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.028 02:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:20:04.028 02:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:20:04.595 02:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.853 02:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:04.853 02:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.853 02:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.853 02:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.853 02:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.853 02:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.853 02:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:05.111 02:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:05.111 02:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.111 02:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.111 02:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:05.111 02:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:05.111 02:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.111 02:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key3 00:20:05.111 02:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:05.112 02:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.112 02:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.112 02:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:05.112 02:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.112 02:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.370 00:20:05.370 02:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.370 02:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.370 02:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.628 02:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.628 02:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.628 02:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.628 02:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.628 02:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.628 02:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.628 { 00:20:05.628 "cntlid": 23, 00:20:05.628 "qid": 0, 00:20:05.628 "state": "enabled", 00:20:05.628 "thread": "nvmf_tgt_poll_group_000", 00:20:05.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:05.628 "listen_address": { 00:20:05.628 "trtype": "RDMA", 00:20:05.628 "adrfam": "IPv4", 00:20:05.628 "traddr": "192.168.100.8", 00:20:05.628 "trsvcid": "4420" 00:20:05.628 }, 00:20:05.628 "peer_address": { 00:20:05.628 "trtype": "RDMA", 00:20:05.628 "adrfam": "IPv4", 00:20:05.628 "traddr": "192.168.100.8", 00:20:05.628 "trsvcid": "54400" 00:20:05.628 }, 00:20:05.628 "auth": { 00:20:05.628 "state": "completed", 00:20:05.628 "digest": "sha256", 00:20:05.628 "dhgroup": "ffdhe3072" 00:20:05.628 } 00:20:05.628 } 00:20:05.628 ]' 00:20:05.628 02:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.628 02:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.628 02:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.628 02:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:05.628 02:01:25 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.628 02:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.628 02:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.628 02:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.887 02:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:20:05.887 02:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:20:06.453 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.712 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:06.712 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.712 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.712 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.712 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.712 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.712 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:06.712 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:06.712 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:06.712 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.712 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:06.712 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:06.712 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:06.712 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.712 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.712 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.712 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.712 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.712 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.712 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.712 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.971 00:20:07.229 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.229 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.229 02:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.229 02:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.229 02:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.229 02:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.229 02:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.229 02:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.229 02:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.229 { 00:20:07.229 "cntlid": 25, 00:20:07.229 "qid": 0, 00:20:07.229 "state": "enabled", 00:20:07.229 "thread": "nvmf_tgt_poll_group_000", 00:20:07.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:07.229 "listen_address": { 00:20:07.229 "trtype": "RDMA", 00:20:07.229 "adrfam": "IPv4", 00:20:07.229 "traddr": "192.168.100.8", 00:20:07.229 "trsvcid": "4420" 00:20:07.229 }, 00:20:07.229 "peer_address": { 00:20:07.229 "trtype": "RDMA", 00:20:07.229 "adrfam": "IPv4", 00:20:07.229 "traddr": "192.168.100.8", 00:20:07.229 "trsvcid": "45552" 00:20:07.229 }, 00:20:07.229 "auth": { 00:20:07.229 "state": "completed", 00:20:07.229 "digest": "sha256", 00:20:07.229 "dhgroup": "ffdhe4096" 00:20:07.229 } 00:20:07.229 } 00:20:07.229 ]' 00:20:07.229 02:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.487 02:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 
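The digest assertion just checked, and the dhgroup/state assertions that follow, are plain jq lookups over the nvmf_subsystem_get_qpairs dump. Condensed into one block, reusing $rpc_py from the sketch above (target socket again assumed to be rpc.py's default; the expected values are the ones this iteration negotiated):

    # Fetch the live qpairs for the subsystem and assert the negotiated auth
    # parameters; state "completed" means DH-HMAC-CHAP finished successfully.
    qpairs=$($rpc_py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]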
00:20:07.487 02:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.487 02:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:07.487 02:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.487 02:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.487 02:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.488 02:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.746 02:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:20:07.746 02:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:20:08.312 02:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.312 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:08.312 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.312 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.312 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.312 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.312 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:08.312 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:08.571 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:08.571 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.571 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:08.571 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:08.571 02:01:28 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:08.571 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.571 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.571 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.571 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.571 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.571 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.571 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.571 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.830 00:20:08.830 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.830 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.830 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.088 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.088 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.088 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.088 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.088 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.088 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.088 { 00:20:09.088 "cntlid": 27, 00:20:09.088 "qid": 0, 00:20:09.088 "state": "enabled", 00:20:09.088 "thread": "nvmf_tgt_poll_group_000", 00:20:09.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:09.088 "listen_address": { 00:20:09.088 "trtype": "RDMA", 00:20:09.088 "adrfam": "IPv4", 00:20:09.088 "traddr": "192.168.100.8", 00:20:09.088 "trsvcid": "4420" 00:20:09.088 }, 00:20:09.088 "peer_address": { 00:20:09.088 "trtype": "RDMA", 00:20:09.088 "adrfam": "IPv4", 00:20:09.088 "traddr": "192.168.100.8", 00:20:09.088 "trsvcid": "36666" 00:20:09.088 }, 00:20:09.088 "auth": { 00:20:09.088 "state": 
"completed", 00:20:09.088 "digest": "sha256", 00:20:09.088 "dhgroup": "ffdhe4096" 00:20:09.088 } 00:20:09.088 } 00:20:09.088 ]' 00:20:09.088 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.088 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.088 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.088 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:09.088 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.088 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.088 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.088 02:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.346 02:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:20:09.346 02:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:20:09.912 02:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.171 02:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:10.171 02:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.171 02:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.171 02:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.171 02:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.171 02:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.171 02:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.429 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:10.429 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:20:10.429 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.429 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:10.429 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:10.429 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.429 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.429 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.429 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.429 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.429 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.429 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.429 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.688 00:20:10.688 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.688 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.688 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.947 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.947 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.947 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.947 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.947 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.947 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.947 { 00:20:10.947 "cntlid": 29, 00:20:10.947 "qid": 0, 00:20:10.947 "state": "enabled", 00:20:10.947 "thread": "nvmf_tgt_poll_group_000", 00:20:10.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:10.947 "listen_address": { 00:20:10.947 "trtype": "RDMA", 00:20:10.947 "adrfam": "IPv4", 00:20:10.947 "traddr": "192.168.100.8", 
00:20:10.947 "trsvcid": "4420" 00:20:10.947 }, 00:20:10.947 "peer_address": { 00:20:10.947 "trtype": "RDMA", 00:20:10.947 "adrfam": "IPv4", 00:20:10.947 "traddr": "192.168.100.8", 00:20:10.947 "trsvcid": "38487" 00:20:10.947 }, 00:20:10.947 "auth": { 00:20:10.947 "state": "completed", 00:20:10.947 "digest": "sha256", 00:20:10.947 "dhgroup": "ffdhe4096" 00:20:10.947 } 00:20:10.947 } 00:20:10.947 ]' 00:20:10.947 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.947 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:10.947 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.947 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:10.947 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.947 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.947 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.947 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.206 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:20:11.206 02:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:20:11.773 02:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.032 02:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:12.032 02:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.032 02:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.032 02:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.032 02:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.032 02:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:12.032 02:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:20:12.291 02:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:12.291 02:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.291 02:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.291 02:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:12.291 02:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:12.291 02:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.291 02:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key3 00:20:12.291 02:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.291 02:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.291 02:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.291 02:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:12.291 02:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.291 02:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.550 00:20:12.550 02:01:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.550 02:01:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.550 02:01:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.809 02:01:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.809 02:01:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.809 02:01:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.809 02:01:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.809 02:01:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.809 02:01:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.809 { 00:20:12.809 "cntlid": 31, 00:20:12.809 "qid": 0, 00:20:12.809 "state": "enabled", 00:20:12.809 "thread": "nvmf_tgt_poll_group_000", 00:20:12.809 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:12.809 "listen_address": { 00:20:12.809 "trtype": "RDMA", 00:20:12.809 "adrfam": "IPv4", 00:20:12.809 "traddr": "192.168.100.8", 00:20:12.809 "trsvcid": "4420" 00:20:12.809 }, 00:20:12.809 "peer_address": { 00:20:12.809 "trtype": "RDMA", 00:20:12.809 "adrfam": "IPv4", 00:20:12.809 "traddr": "192.168.100.8", 00:20:12.809 "trsvcid": "48608" 00:20:12.809 }, 00:20:12.809 "auth": { 00:20:12.809 "state": "completed", 00:20:12.809 "digest": "sha256", 00:20:12.809 "dhgroup": "ffdhe4096" 00:20:12.809 } 00:20:12.809 } 00:20:12.809 ]' 00:20:12.809 02:01:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.809 02:01:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.809 02:01:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.809 02:01:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:12.809 02:01:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.809 02:01:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.809 02:01:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.809 02:01:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.068 02:01:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:20:13.068 02:01:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:20:13.636 02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.636 02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:13.636 02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.636 02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.636 02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.636 02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.636 02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.636 02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:13.636 
02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:13.895 02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:13.895 02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.895 02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:13.895 02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:13.895 02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:13.895 02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.895 02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.895 02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.895 02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.895 02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.895 02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.895 02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.895 02:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.154 00:20:14.413 02:01:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.413 02:01:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.413 02:01:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.413 02:01:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.413 02:01:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.413 02:01:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.413 02:01:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.413 02:01:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
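Besides the SPDK host-daemon path, the run keeps exercising the kernel initiator leg: nvme-cli connects with the raw DHHC-1 secrets, then disconnects before the subsystem host entry is removed. A sketch of that leg with the secrets elided; the real generated values appear verbatim in the log above, and <key0>/<ckey0> here are placeholders, not working secrets:

    # Kernel-initiator leg as driven by nvme_connect in this run; the DHHC-1
    # strings below are placeholders for the generated secrets logged above.
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q $hostnqn --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 \
        --dhchap-secret 'DHHC-1:00:<key0>' --dhchap-ctrl-secret 'DHHC-1:03:<ckey0>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0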
00:20:14.413 02:01:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.413 { 00:20:14.413 "cntlid": 33, 00:20:14.413 "qid": 0, 00:20:14.413 "state": "enabled", 00:20:14.413 "thread": "nvmf_tgt_poll_group_000", 00:20:14.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:14.413 "listen_address": { 00:20:14.413 "trtype": "RDMA", 00:20:14.413 "adrfam": "IPv4", 00:20:14.413 "traddr": "192.168.100.8", 00:20:14.413 "trsvcid": "4420" 00:20:14.413 }, 00:20:14.413 "peer_address": { 00:20:14.413 "trtype": "RDMA", 00:20:14.413 "adrfam": "IPv4", 00:20:14.413 "traddr": "192.168.100.8", 00:20:14.413 "trsvcid": "39048" 00:20:14.413 }, 00:20:14.413 "auth": { 00:20:14.413 "state": "completed", 00:20:14.413 "digest": "sha256", 00:20:14.413 "dhgroup": "ffdhe6144" 00:20:14.413 } 00:20:14.413 } 00:20:14.413 ]' 00:20:14.671 02:01:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.671 02:01:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.671 02:01:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.671 02:01:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:14.671 02:01:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.671 02:01:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.671 02:01:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.671 02:01:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.929 02:01:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:20:14.929 02:01:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:20:15.497 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.497 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:15.497 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.497 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.755 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.755 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.755 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:15.755 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:15.755 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:15.755 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.755 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:15.755 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:15.755 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:15.755 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.755 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.755 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.755 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.755 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.755 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.755 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.755 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.321 00:20:16.321 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.321 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.321 02:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.321 02:01:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.321 02:01:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:20:16.321 02:01:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.321 02:01:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.321 02:01:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.321 02:01:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.321 { 00:20:16.321 "cntlid": 35, 00:20:16.321 "qid": 0, 00:20:16.321 "state": "enabled", 00:20:16.321 "thread": "nvmf_tgt_poll_group_000", 00:20:16.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:16.321 "listen_address": { 00:20:16.321 "trtype": "RDMA", 00:20:16.321 "adrfam": "IPv4", 00:20:16.321 "traddr": "192.168.100.8", 00:20:16.321 "trsvcid": "4420" 00:20:16.321 }, 00:20:16.321 "peer_address": { 00:20:16.321 "trtype": "RDMA", 00:20:16.321 "adrfam": "IPv4", 00:20:16.321 "traddr": "192.168.100.8", 00:20:16.321 "trsvcid": "54548" 00:20:16.321 }, 00:20:16.321 "auth": { 00:20:16.321 "state": "completed", 00:20:16.321 "digest": "sha256", 00:20:16.321 "dhgroup": "ffdhe6144" 00:20:16.321 } 00:20:16.321 } 00:20:16.321 ]' 00:20:16.321 02:01:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.321 02:01:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.321 02:01:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.579 02:01:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:16.579 02:01:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.579 02:01:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.579 02:01:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.579 02:01:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.838 02:01:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:20:16.838 02:01:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:20:17.404 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.404 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:17.404 02:01:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.404 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.404 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.404 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.404 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:17.404 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:17.662 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:17.662 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.662 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:17.662 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:17.662 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:17.662 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.662 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.662 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.662 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.662 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.662 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.662 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.662 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.920 00:20:18.179 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.179 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.179 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:18.179 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.179 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.179 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.179 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.179 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.179 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.179 { 00:20:18.179 "cntlid": 37, 00:20:18.179 "qid": 0, 00:20:18.179 "state": "enabled", 00:20:18.179 "thread": "nvmf_tgt_poll_group_000", 00:20:18.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:18.179 "listen_address": { 00:20:18.179 "trtype": "RDMA", 00:20:18.179 "adrfam": "IPv4", 00:20:18.179 "traddr": "192.168.100.8", 00:20:18.179 "trsvcid": "4420" 00:20:18.179 }, 00:20:18.179 "peer_address": { 00:20:18.179 "trtype": "RDMA", 00:20:18.179 "adrfam": "IPv4", 00:20:18.179 "traddr": "192.168.100.8", 00:20:18.179 "trsvcid": "33066" 00:20:18.179 }, 00:20:18.179 "auth": { 00:20:18.179 "state": "completed", 00:20:18.179 "digest": "sha256", 00:20:18.179 "dhgroup": "ffdhe6144" 00:20:18.179 } 00:20:18.179 } 00:20:18.179 ]' 00:20:18.179 02:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.437 02:01:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.437 02:01:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.437 02:01:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:18.437 02:01:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.437 02:01:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.437 02:01:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.437 02:01:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.695 02:01:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:20:18.695 02:01:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:20:19.262 02:01:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.262 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.262 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:19.262 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.262 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.262 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.262 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.262 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:19.262 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:19.521 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:19.521 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.521 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:19.521 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:19.521 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:19.521 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.521 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key3 00:20:19.521 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.521 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.521 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.521 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:19.521 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:19.521 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.087 00:20:20.087 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.087 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.087 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.087 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.087 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.087 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.087 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.087 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.087 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.087 { 00:20:20.087 "cntlid": 39, 00:20:20.087 "qid": 0, 00:20:20.087 "state": "enabled", 00:20:20.087 "thread": "nvmf_tgt_poll_group_000", 00:20:20.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:20.087 "listen_address": { 00:20:20.087 "trtype": "RDMA", 00:20:20.087 "adrfam": "IPv4", 00:20:20.087 "traddr": "192.168.100.8", 00:20:20.087 "trsvcid": "4420" 00:20:20.087 }, 00:20:20.087 "peer_address": { 00:20:20.087 "trtype": "RDMA", 00:20:20.087 "adrfam": "IPv4", 00:20:20.087 "traddr": "192.168.100.8", 00:20:20.087 "trsvcid": "54974" 00:20:20.087 }, 00:20:20.087 "auth": { 00:20:20.087 "state": "completed", 00:20:20.087 "digest": "sha256", 00:20:20.087 "dhgroup": "ffdhe6144" 00:20:20.087 } 00:20:20.087 } 00:20:20.087 ]' 00:20:20.087 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.087 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.087 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.345 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:20.345 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.345 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.345 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.345 02:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.603 02:01:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:20:20.603 02:01:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:20:21.169 02:01:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.170 02:01:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:21.170 02:01:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.170 02:01:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.170 02:01:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.170 02:01:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:21.170 02:01:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.170 02:01:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:21.170 02:01:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:21.428 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:21.428 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.428 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:21.428 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:21.428 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:21.428 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.428 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.428 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.428 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.428 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.428 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.428 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.428 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.995 00:20:21.995 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.995 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.995 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.995 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.995 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.995 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.995 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.995 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.995 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.995 { 00:20:21.995 "cntlid": 41, 00:20:21.995 "qid": 0, 00:20:21.995 "state": "enabled", 00:20:21.995 "thread": "nvmf_tgt_poll_group_000", 00:20:21.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:21.995 "listen_address": { 00:20:21.995 "trtype": "RDMA", 00:20:21.995 "adrfam": "IPv4", 00:20:21.995 "traddr": "192.168.100.8", 00:20:21.995 "trsvcid": "4420" 00:20:21.995 }, 00:20:21.995 "peer_address": { 00:20:21.995 "trtype": "RDMA", 00:20:21.995 "adrfam": "IPv4", 00:20:21.995 "traddr": "192.168.100.8", 00:20:21.995 "trsvcid": "39951" 00:20:21.995 }, 00:20:21.995 "auth": { 00:20:21.995 "state": "completed", 00:20:21.995 "digest": "sha256", 00:20:21.995 "dhgroup": "ffdhe8192" 00:20:21.995 } 00:20:21.995 } 00:20:21.995 ]' 00:20:21.995 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.253 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.253 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.253 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:22.253 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.253 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.253 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.253 02:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.512 02:01:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:20:22.512 02:01:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:20:23.078 02:01:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.078 02:01:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:23.078 02:01:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.078 02:01:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.078 02:01:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.078 02:01:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.078 02:01:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:23.078 02:01:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:23.337 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:23.337 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.337 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:23.337 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:23.337 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:23.337 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.337 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.337 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.337 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.337 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.337 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.337 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.337 
02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.904 00:20:23.904 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.904 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.904 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.163 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.163 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.163 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.163 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.163 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.163 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.163 { 00:20:24.163 "cntlid": 43, 00:20:24.163 "qid": 0, 00:20:24.163 "state": "enabled", 00:20:24.163 "thread": "nvmf_tgt_poll_group_000", 00:20:24.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:24.163 "listen_address": { 00:20:24.163 "trtype": "RDMA", 00:20:24.163 "adrfam": "IPv4", 00:20:24.163 "traddr": "192.168.100.8", 00:20:24.163 "trsvcid": "4420" 00:20:24.163 }, 00:20:24.163 "peer_address": { 00:20:24.163 "trtype": "RDMA", 00:20:24.163 "adrfam": "IPv4", 00:20:24.163 "traddr": "192.168.100.8", 00:20:24.163 "trsvcid": "40358" 00:20:24.163 }, 00:20:24.163 "auth": { 00:20:24.163 "state": "completed", 00:20:24.163 "digest": "sha256", 00:20:24.163 "dhgroup": "ffdhe8192" 00:20:24.163 } 00:20:24.163 } 00:20:24.163 ]' 00:20:24.163 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.163 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.163 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.163 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.163 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.163 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.163 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.163 02:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.422 02:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:20:24.422 02:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:20:24.990 02:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.249 02:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:25.249 02:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.249 02:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.249 02:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.249 02:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.249 02:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:25.249 02:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:25.508 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:25.508 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.508 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:25.508 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:25.508 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:25.508 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.508 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.508 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.508 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.508 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.508 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.508 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.508 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.767 00:20:26.027 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.027 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.027 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.027 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.027 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.027 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.027 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.027 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.027 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.027 { 00:20:26.027 "cntlid": 45, 00:20:26.027 "qid": 0, 00:20:26.027 "state": "enabled", 00:20:26.027 "thread": "nvmf_tgt_poll_group_000", 00:20:26.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:26.027 "listen_address": { 00:20:26.027 "trtype": "RDMA", 00:20:26.027 "adrfam": "IPv4", 00:20:26.027 "traddr": "192.168.100.8", 00:20:26.027 "trsvcid": "4420" 00:20:26.027 }, 00:20:26.027 "peer_address": { 00:20:26.027 "trtype": "RDMA", 00:20:26.027 "adrfam": "IPv4", 00:20:26.027 "traddr": "192.168.100.8", 00:20:26.027 "trsvcid": "44588" 00:20:26.027 }, 00:20:26.027 "auth": { 00:20:26.027 "state": "completed", 00:20:26.027 "digest": "sha256", 00:20:26.027 "dhgroup": "ffdhe8192" 00:20:26.027 } 00:20:26.027 } 00:20:26.027 ]' 00:20:26.027 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.286 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.286 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.286 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:26.286 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.286 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.286 02:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.286 02:01:45 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.544 02:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:20:26.544 02:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:20:27.111 02:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.111 02:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:27.111 02:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.111 02:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.111 02:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.111 02:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.111 02:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:27.111 02:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:27.376 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:27.376 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.376 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:27.376 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:27.376 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:27.376 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.376 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key3 00:20:27.376 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.376 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.376 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.376 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:27.376 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.376 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.944 00:20:27.944 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.944 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.944 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.203 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.203 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.203 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.203 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.203 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.203 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.203 { 00:20:28.203 "cntlid": 47, 00:20:28.203 "qid": 0, 00:20:28.203 "state": "enabled", 00:20:28.203 "thread": "nvmf_tgt_poll_group_000", 00:20:28.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:28.203 "listen_address": { 00:20:28.203 "trtype": "RDMA", 00:20:28.203 "adrfam": "IPv4", 00:20:28.203 "traddr": "192.168.100.8", 00:20:28.203 "trsvcid": "4420" 00:20:28.203 }, 00:20:28.203 "peer_address": { 00:20:28.203 "trtype": "RDMA", 00:20:28.203 "adrfam": "IPv4", 00:20:28.203 "traddr": "192.168.100.8", 00:20:28.203 "trsvcid": "41779" 00:20:28.203 }, 00:20:28.203 "auth": { 00:20:28.203 "state": "completed", 00:20:28.203 "digest": "sha256", 00:20:28.203 "dhgroup": "ffdhe8192" 00:20:28.203 } 00:20:28.203 } 00:20:28.203 ]' 00:20:28.203 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.203 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.203 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.203 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:28.203 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.203 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:28.203 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.203 02:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.463 02:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:20:28.463 02:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:20:29.030 02:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.030 02:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:29.030 02:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.030 02:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.030 02:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.031 02:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:29.031 02:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:29.031 02:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.031 02:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:29.031 02:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:29.291 02:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:29.291 02:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.291 02:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:29.291 02:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:29.291 02:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:29.291 02:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.291 02:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:20:29.291 02:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.291 02:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.291 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.291 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.291 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.291 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.601 00:20:29.601 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.601 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.601 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.937 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.937 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.937 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.937 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.937 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.937 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.937 { 00:20:29.937 "cntlid": 49, 00:20:29.937 "qid": 0, 00:20:29.937 "state": "enabled", 00:20:29.937 "thread": "nvmf_tgt_poll_group_000", 00:20:29.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:29.937 "listen_address": { 00:20:29.937 "trtype": "RDMA", 00:20:29.937 "adrfam": "IPv4", 00:20:29.937 "traddr": "192.168.100.8", 00:20:29.937 "trsvcid": "4420" 00:20:29.937 }, 00:20:29.937 "peer_address": { 00:20:29.937 "trtype": "RDMA", 00:20:29.937 "adrfam": "IPv4", 00:20:29.937 "traddr": "192.168.100.8", 00:20:29.937 "trsvcid": "40526" 00:20:29.937 }, 00:20:29.937 "auth": { 00:20:29.937 "state": "completed", 00:20:29.937 "digest": "sha384", 00:20:29.937 "dhgroup": "null" 00:20:29.937 } 00:20:29.937 } 00:20:29.937 ]' 00:20:29.937 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.937 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.937 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
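[Note: the jq probes echoed around this point are how each pass validates the negotiated parameters: auth.sh@74 captures the nvmf_subsystem_get_qpairs output into $qpairs, and three fields of its auth object are compared against the expectations for the pass. A minimal sketch of that check, assuming $qpairs already holds the JSON array as captured in the trace (values shown are this pass's sha384/null expectations):

  # Hedged sketch of the per-pass verification over the captured qpairs JSON.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]     # digest for this pass
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]       # dhgroup for this pass
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # authentication finished

Only after all three checks pass does the script detach the controller and repeat with the next key.]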
00:20:29.937 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:29.937 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.937 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.937 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.937 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.218 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:20:30.218 02:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:20:30.786 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.786 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:30.786 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.786 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.786 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.786 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.786 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:30.787 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:31.046 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:31.046 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.046 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.046 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:31.046 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:31.046 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.046 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.046 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.046 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.046 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.046 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.046 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.046 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.305 00:20:31.305 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.305 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.305 02:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.565 02:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.565 02:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.565 02:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.565 02:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.565 02:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.565 02:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.565 { 00:20:31.565 "cntlid": 51, 00:20:31.565 "qid": 0, 00:20:31.565 "state": "enabled", 00:20:31.565 "thread": "nvmf_tgt_poll_group_000", 00:20:31.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:31.565 "listen_address": { 00:20:31.565 "trtype": "RDMA", 00:20:31.565 "adrfam": "IPv4", 00:20:31.565 "traddr": "192.168.100.8", 00:20:31.565 "trsvcid": "4420" 00:20:31.565 }, 00:20:31.565 "peer_address": { 00:20:31.565 "trtype": "RDMA", 00:20:31.565 "adrfam": "IPv4", 00:20:31.565 "traddr": "192.168.100.8", 00:20:31.565 "trsvcid": "54018" 00:20:31.565 }, 00:20:31.565 "auth": { 00:20:31.565 "state": "completed", 00:20:31.565 "digest": "sha384", 00:20:31.565 "dhgroup": "null" 00:20:31.565 } 00:20:31.565 } 00:20:31.565 ]' 00:20:31.565 02:01:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.565 02:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.565 02:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.565 02:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:31.565 02:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.565 02:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.565 02:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.565 02:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.824 02:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:20:31.824 02:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:20:32.391 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.651 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:32.651 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.651 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.651 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.651 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.651 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:32.651 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:32.651 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:32.910 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.910 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:32.910 02:01:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:32.910 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:32.910 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.910 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.910 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.910 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.910 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.910 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.910 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.910 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.169 00:20:33.169 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.169 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.169 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.169 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.169 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.169 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.169 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.169 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.169 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.169 { 00:20:33.169 "cntlid": 53, 00:20:33.169 "qid": 0, 00:20:33.169 "state": "enabled", 00:20:33.169 "thread": "nvmf_tgt_poll_group_000", 00:20:33.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:33.169 "listen_address": { 00:20:33.169 "trtype": "RDMA", 00:20:33.169 "adrfam": "IPv4", 00:20:33.169 "traddr": "192.168.100.8", 00:20:33.169 "trsvcid": "4420" 00:20:33.169 }, 00:20:33.169 "peer_address": { 00:20:33.169 "trtype": "RDMA", 00:20:33.169 "adrfam": "IPv4", 00:20:33.169 "traddr": 
"192.168.100.8", 00:20:33.169 "trsvcid": "46517" 00:20:33.169 }, 00:20:33.169 "auth": { 00:20:33.169 "state": "completed", 00:20:33.169 "digest": "sha384", 00:20:33.169 "dhgroup": "null" 00:20:33.169 } 00:20:33.169 } 00:20:33.169 ]' 00:20:33.169 02:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.429 02:01:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.429 02:01:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.429 02:01:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:33.429 02:01:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.429 02:01:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.429 02:01:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.429 02:01:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.688 02:01:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:20:33.688 02:01:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:20:34.256 02:01:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.256 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:34.256 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.256 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.256 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.256 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.256 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:34.256 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:34.515 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:34.515 02:01:54 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.515 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:34.515 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:34.515 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:34.515 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.515 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key3 00:20:34.515 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.515 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.515 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.515 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:34.515 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:34.515 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:34.774 00:20:34.774 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.774 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.774 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.034 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.034 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.034 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.034 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.034 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.034 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.034 { 00:20:35.034 "cntlid": 55, 00:20:35.034 "qid": 0, 00:20:35.034 "state": "enabled", 00:20:35.034 "thread": "nvmf_tgt_poll_group_000", 00:20:35.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:35.034 "listen_address": { 00:20:35.034 "trtype": "RDMA", 00:20:35.034 "adrfam": "IPv4", 00:20:35.034 "traddr": "192.168.100.8", 00:20:35.034 "trsvcid": "4420" 
00:20:35.034 }, 00:20:35.034 "peer_address": { 00:20:35.034 "trtype": "RDMA", 00:20:35.034 "adrfam": "IPv4", 00:20:35.034 "traddr": "192.168.100.8", 00:20:35.034 "trsvcid": "50063" 00:20:35.034 }, 00:20:35.034 "auth": { 00:20:35.034 "state": "completed", 00:20:35.034 "digest": "sha384", 00:20:35.034 "dhgroup": "null" 00:20:35.034 } 00:20:35.034 } 00:20:35.034 ]' 00:20:35.034 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.034 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.034 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.034 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:35.034 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.034 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.034 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.034 02:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.293 02:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:20:35.293 02:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:20:35.884 02:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.142 02:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:36.142 02:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.142 02:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.142 02:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.142 02:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.142 02:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.142 02:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:36.142 02:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:36.400 
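
[Editor's note] The key3 pass just above differs from the earlier ones: ckeys[3] is empty, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion drops the controller key entirely and authentication is unidirectional (the host proves itself; the controller is not challenged). That is why nvmf_subsystem_add_host was called with --dhchap-key key3 only, and why the nvme connect carried a single secret. The kernel-initiator side as the trace's nvme_connect helper drives it ($key and $ckey stand in for the DHHC-1 strings generated for this run):

    hostid=80e71deb-ee4e-e711-906e-0012795d9712

    # bidirectional (key0..key2 passes): host secret plus controller secret
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"

    # unidirectional (key3 pass): --dhchap-ctrl-secret is simply omitted
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" -l 0 \
        --dhchap-secret "$key"

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # teardown, as in the trace
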
02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:36.400 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.400 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.400 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:36.400 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:36.400 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.400 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.400 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.400 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.400 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.400 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.400 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.400 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.657 00:20:36.657 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.657 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.657 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.915 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.915 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.915 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.915 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.915 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.915 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.915 { 00:20:36.915 "cntlid": 57, 00:20:36.915 "qid": 0, 00:20:36.915 "state": "enabled", 00:20:36.915 "thread": "nvmf_tgt_poll_group_000", 00:20:36.915 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:36.915 "listen_address": { 00:20:36.915 "trtype": "RDMA", 00:20:36.915 "adrfam": "IPv4", 00:20:36.915 "traddr": "192.168.100.8", 00:20:36.915 "trsvcid": "4420" 00:20:36.915 }, 00:20:36.915 "peer_address": { 00:20:36.915 "trtype": "RDMA", 00:20:36.915 "adrfam": "IPv4", 00:20:36.915 "traddr": "192.168.100.8", 00:20:36.915 "trsvcid": "60515" 00:20:36.915 }, 00:20:36.915 "auth": { 00:20:36.915 "state": "completed", 00:20:36.915 "digest": "sha384", 00:20:36.915 "dhgroup": "ffdhe2048" 00:20:36.915 } 00:20:36.915 } 00:20:36.915 ]' 00:20:36.915 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.915 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.915 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.915 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:36.915 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.915 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.915 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.915 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.174 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:20:37.174 02:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:20:37.740 02:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.998 02:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:37.998 02:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.998 02:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.998 02:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.998 02:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.998 02:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:37.998 02:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:37.998 02:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:37.998 02:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.998 02:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:37.998 02:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:37.998 02:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:37.998 02:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.998 02:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.998 02:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.998 02:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.998 02:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.998 02:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.998 02:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.998 02:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.256 00:20:38.256 02:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.256 02:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.256 02:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.515 02:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.515 02:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.515 02:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.515 02:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.515 02:01:58 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.515 02:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.515 { 00:20:38.515 "cntlid": 59, 00:20:38.515 "qid": 0, 00:20:38.515 "state": "enabled", 00:20:38.515 "thread": "nvmf_tgt_poll_group_000", 00:20:38.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:38.515 "listen_address": { 00:20:38.515 "trtype": "RDMA", 00:20:38.515 "adrfam": "IPv4", 00:20:38.515 "traddr": "192.168.100.8", 00:20:38.515 "trsvcid": "4420" 00:20:38.515 }, 00:20:38.515 "peer_address": { 00:20:38.515 "trtype": "RDMA", 00:20:38.515 "adrfam": "IPv4", 00:20:38.515 "traddr": "192.168.100.8", 00:20:38.515 "trsvcid": "58142" 00:20:38.515 }, 00:20:38.515 "auth": { 00:20:38.515 "state": "completed", 00:20:38.515 "digest": "sha384", 00:20:38.515 "dhgroup": "ffdhe2048" 00:20:38.515 } 00:20:38.515 } 00:20:38.515 ]' 00:20:38.515 02:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.515 02:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.515 02:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.773 02:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:38.773 02:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.773 02:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.773 02:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.773 02:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.032 02:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:20:39.032 02:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:20:39.600 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.600 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:39.600 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.600 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.600 02:01:59 
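
[Editor's note] On the DHHC-1 strings used throughout this section: that is the standard NVMe DH-HMAC-CHAP secret representation, DHHC-1:<hf>:<base64 secret>:, where the two-digit <hf> field records how the raw secret was transformed (00 unhashed, 01/02/03 for SHA-256/384/512). It matches the key0..key3 naming in this run: key0 carries DHHC-1:00:, key1 DHHC-1:01:, and so on. A small sketch that decodes the marker (the sample secret is one of this run's generated keys):

    secret='DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE:'
    case $(cut -d: -f2 <<< "$secret") in
        00) echo "secret stored unhashed" ;;
        01) echo "secret transformed with SHA-256" ;;
        02) echo "secret transformed with SHA-384" ;;
        03) echo "secret transformed with SHA-512" ;;
    esac
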
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.600 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.600 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:39.600 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:39.860 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:39.860 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.860 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:39.860 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:39.860 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:39.860 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.860 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.860 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.860 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.860 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.860 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.860 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.860 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.119 00:20:40.119 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.119 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.119 02:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.377 02:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.377 02:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.377 02:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.377 02:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.377 02:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.377 02:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.377 { 00:20:40.378 "cntlid": 61, 00:20:40.378 "qid": 0, 00:20:40.378 "state": "enabled", 00:20:40.378 "thread": "nvmf_tgt_poll_group_000", 00:20:40.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:40.378 "listen_address": { 00:20:40.378 "trtype": "RDMA", 00:20:40.378 "adrfam": "IPv4", 00:20:40.378 "traddr": "192.168.100.8", 00:20:40.378 "trsvcid": "4420" 00:20:40.378 }, 00:20:40.378 "peer_address": { 00:20:40.378 "trtype": "RDMA", 00:20:40.378 "adrfam": "IPv4", 00:20:40.378 "traddr": "192.168.100.8", 00:20:40.378 "trsvcid": "57000" 00:20:40.378 }, 00:20:40.378 "auth": { 00:20:40.378 "state": "completed", 00:20:40.378 "digest": "sha384", 00:20:40.378 "dhgroup": "ffdhe2048" 00:20:40.378 } 00:20:40.378 } 00:20:40.378 ]' 00:20:40.378 02:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.378 02:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.378 02:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.378 02:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:40.378 02:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.378 02:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.378 02:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.378 02:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.637 02:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:20:40.637 02:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:20:41.204 02:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.462 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:41.462 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.462 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.462 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.462 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.462 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:41.462 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:41.721 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:41.721 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.721 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:41.721 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:41.721 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:41.721 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.721 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key3 00:20:41.721 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.721 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.721 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.721 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:41.721 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.721 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.981 00:20:41.981 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.981 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.981 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:20:41.981 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.981 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.981 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.981 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.240 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.240 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.240 { 00:20:42.240 "cntlid": 63, 00:20:42.240 "qid": 0, 00:20:42.240 "state": "enabled", 00:20:42.240 "thread": "nvmf_tgt_poll_group_000", 00:20:42.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:42.240 "listen_address": { 00:20:42.240 "trtype": "RDMA", 00:20:42.240 "adrfam": "IPv4", 00:20:42.240 "traddr": "192.168.100.8", 00:20:42.240 "trsvcid": "4420" 00:20:42.240 }, 00:20:42.240 "peer_address": { 00:20:42.240 "trtype": "RDMA", 00:20:42.240 "adrfam": "IPv4", 00:20:42.240 "traddr": "192.168.100.8", 00:20:42.240 "trsvcid": "58904" 00:20:42.240 }, 00:20:42.240 "auth": { 00:20:42.240 "state": "completed", 00:20:42.240 "digest": "sha384", 00:20:42.240 "dhgroup": "ffdhe2048" 00:20:42.240 } 00:20:42.240 } 00:20:42.240 ]' 00:20:42.240 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.240 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.240 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.240 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:42.240 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.240 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.240 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.240 02:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.500 02:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:20:42.500 02:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:20:43.068 02:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.068 02:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:43.068 02:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.068 02:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.068 02:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.068 02:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.068 02:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.068 02:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:43.068 02:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:43.327 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:43.327 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.327 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.327 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:43.327 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:43.327 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.327 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.327 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.327 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.327 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.327 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.327 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.327 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.586 00:20:43.586 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.586 02:02:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.587 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.846 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.846 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.846 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.846 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.846 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.846 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.846 { 00:20:43.846 "cntlid": 65, 00:20:43.846 "qid": 0, 00:20:43.846 "state": "enabled", 00:20:43.846 "thread": "nvmf_tgt_poll_group_000", 00:20:43.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:43.846 "listen_address": { 00:20:43.846 "trtype": "RDMA", 00:20:43.846 "adrfam": "IPv4", 00:20:43.846 "traddr": "192.168.100.8", 00:20:43.846 "trsvcid": "4420" 00:20:43.846 }, 00:20:43.846 "peer_address": { 00:20:43.846 "trtype": "RDMA", 00:20:43.846 "adrfam": "IPv4", 00:20:43.846 "traddr": "192.168.100.8", 00:20:43.846 "trsvcid": "55878" 00:20:43.846 }, 00:20:43.846 "auth": { 00:20:43.846 "state": "completed", 00:20:43.846 "digest": "sha384", 00:20:43.846 "dhgroup": "ffdhe3072" 00:20:43.846 } 00:20:43.846 } 00:20:43.846 ]' 00:20:43.846 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.846 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.846 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.846 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:43.846 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.105 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.105 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.105 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.105 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:20:44.105 02:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret 
DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:20:44.673 02:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.932 02:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:44.932 02:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.932 02:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.932 02:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.932 02:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.932 02:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:44.932 02:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.192 02:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:45.192 02:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.192 02:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.192 02:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:45.192 02:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:45.192 02:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.192 02:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.192 02:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.192 02:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.192 02:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.192 02:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.192 02:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.192 02:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.451 00:20:45.451 02:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.451 02:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.451 02:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.710 02:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.710 02:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.710 02:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.710 02:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.710 02:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.710 02:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.710 { 00:20:45.710 "cntlid": 67, 00:20:45.710 "qid": 0, 00:20:45.710 "state": "enabled", 00:20:45.710 "thread": "nvmf_tgt_poll_group_000", 00:20:45.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:45.710 "listen_address": { 00:20:45.710 "trtype": "RDMA", 00:20:45.710 "adrfam": "IPv4", 00:20:45.710 "traddr": "192.168.100.8", 00:20:45.710 "trsvcid": "4420" 00:20:45.710 }, 00:20:45.710 "peer_address": { 00:20:45.710 "trtype": "RDMA", 00:20:45.710 "adrfam": "IPv4", 00:20:45.710 "traddr": "192.168.100.8", 00:20:45.710 "trsvcid": "37410" 00:20:45.710 }, 00:20:45.710 "auth": { 00:20:45.710 "state": "completed", 00:20:45.710 "digest": "sha384", 00:20:45.710 "dhgroup": "ffdhe3072" 00:20:45.710 } 00:20:45.710 } 00:20:45.710 ]' 00:20:45.710 02:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.710 02:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.710 02:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.710 02:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:45.710 02:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.710 02:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.710 02:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.710 02:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.969 02:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret 
DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:20:45.969 02:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:20:46.538 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.538 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:46.538 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.538 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.796 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.796 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.796 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:46.796 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:46.796 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:46.796 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.796 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.796 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:46.796 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:46.796 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.796 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.796 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.796 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.796 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.796 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.796 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.796 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.054 00:20:47.054 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.054 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.054 02:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.313 02:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.313 02:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.313 02:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.313 02:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.313 02:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.313 02:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.313 { 00:20:47.313 "cntlid": 69, 00:20:47.313 "qid": 0, 00:20:47.313 "state": "enabled", 00:20:47.313 "thread": "nvmf_tgt_poll_group_000", 00:20:47.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:47.313 "listen_address": { 00:20:47.313 "trtype": "RDMA", 00:20:47.313 "adrfam": "IPv4", 00:20:47.313 "traddr": "192.168.100.8", 00:20:47.313 "trsvcid": "4420" 00:20:47.313 }, 00:20:47.313 "peer_address": { 00:20:47.313 "trtype": "RDMA", 00:20:47.313 "adrfam": "IPv4", 00:20:47.313 "traddr": "192.168.100.8", 00:20:47.313 "trsvcid": "60739" 00:20:47.313 }, 00:20:47.313 "auth": { 00:20:47.313 "state": "completed", 00:20:47.313 "digest": "sha384", 00:20:47.313 "dhgroup": "ffdhe3072" 00:20:47.313 } 00:20:47.313 } 00:20:47.313 ]' 00:20:47.313 02:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.313 02:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.313 02:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.572 02:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:47.572 02:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.572 02:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.572 02:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.572 02:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.831 02:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:20:47.831 02:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:20:48.399 02:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.399 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:48.399 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.399 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.399 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.399 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.399 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:48.399 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:48.659 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:48.659 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.659 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.659 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:48.659 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:48.659 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.659 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key3 00:20:48.659 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.659 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.659 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.659 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- 
# bdev_connect -b nvme0 --dhchap-key key3 00:20:48.659 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:48.659 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:48.918 00:20:48.918 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.918 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.918 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.177 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.177 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.177 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.177 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.177 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.177 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.177 { 00:20:49.177 "cntlid": 71, 00:20:49.177 "qid": 0, 00:20:49.177 "state": "enabled", 00:20:49.177 "thread": "nvmf_tgt_poll_group_000", 00:20:49.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:49.177 "listen_address": { 00:20:49.177 "trtype": "RDMA", 00:20:49.177 "adrfam": "IPv4", 00:20:49.177 "traddr": "192.168.100.8", 00:20:49.177 "trsvcid": "4420" 00:20:49.177 }, 00:20:49.177 "peer_address": { 00:20:49.177 "trtype": "RDMA", 00:20:49.177 "adrfam": "IPv4", 00:20:49.177 "traddr": "192.168.100.8", 00:20:49.177 "trsvcid": "53845" 00:20:49.177 }, 00:20:49.177 "auth": { 00:20:49.177 "state": "completed", 00:20:49.177 "digest": "sha384", 00:20:49.177 "dhgroup": "ffdhe3072" 00:20:49.177 } 00:20:49.177 } 00:20:49.177 ]' 00:20:49.177 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.177 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.177 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.177 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:49.177 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.177 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.177 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:20:49.177 02:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.436 02:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:20:49.436 02:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:20:50.004 02:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.264 02:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:50.264 02:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.264 02:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.264 02:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.264 02:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:50.264 02:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.264 02:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:50.264 02:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:50.264 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:50.264 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.264 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:50.264 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:50.264 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:50.264 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.264 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.264 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.264 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.264 02:02:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.264 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.264 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.264 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.832 00:20:50.832 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.832 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.832 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.832 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.832 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.832 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.832 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.832 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.832 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.832 { 00:20:50.832 "cntlid": 73, 00:20:50.832 "qid": 0, 00:20:50.832 "state": "enabled", 00:20:50.833 "thread": "nvmf_tgt_poll_group_000", 00:20:50.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:50.833 "listen_address": { 00:20:50.833 "trtype": "RDMA", 00:20:50.833 "adrfam": "IPv4", 00:20:50.833 "traddr": "192.168.100.8", 00:20:50.833 "trsvcid": "4420" 00:20:50.833 }, 00:20:50.833 "peer_address": { 00:20:50.833 "trtype": "RDMA", 00:20:50.833 "adrfam": "IPv4", 00:20:50.833 "traddr": "192.168.100.8", 00:20:50.833 "trsvcid": "40137" 00:20:50.833 }, 00:20:50.833 "auth": { 00:20:50.833 "state": "completed", 00:20:50.833 "digest": "sha384", 00:20:50.833 "dhgroup": "ffdhe4096" 00:20:50.833 } 00:20:50.833 } 00:20:50.833 ]' 00:20:50.833 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.833 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.833 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.092 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:51.092 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 
00:20:51.092 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.092 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.092 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.350 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:20:51.350 02:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:20:51.919 02:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.919 02:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:51.919 02:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.919 02:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.919 02:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.919 02:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.919 02:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:51.919 02:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:52.182 02:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:52.182 02:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.182 02:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.182 02:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:52.182 02:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:52.182 02:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.182 02:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.182 02:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.182 02:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.182 02:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.182 02:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.182 02:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.182 02:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.441 00:20:52.441 02:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.441 02:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.441 02:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.701 02:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.701 02:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.701 02:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.701 02:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.701 02:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.701 02:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.701 { 00:20:52.701 "cntlid": 75, 00:20:52.701 "qid": 0, 00:20:52.701 "state": "enabled", 00:20:52.701 "thread": "nvmf_tgt_poll_group_000", 00:20:52.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:52.701 "listen_address": { 00:20:52.701 "trtype": "RDMA", 00:20:52.701 "adrfam": "IPv4", 00:20:52.701 "traddr": "192.168.100.8", 00:20:52.701 "trsvcid": "4420" 00:20:52.701 }, 00:20:52.701 "peer_address": { 00:20:52.701 "trtype": "RDMA", 00:20:52.701 "adrfam": "IPv4", 00:20:52.701 "traddr": "192.168.100.8", 00:20:52.701 "trsvcid": "49203" 00:20:52.701 }, 00:20:52.701 "auth": { 00:20:52.701 "state": "completed", 00:20:52.701 "digest": "sha384", 00:20:52.701 "dhgroup": "ffdhe4096" 00:20:52.701 } 00:20:52.701 } 00:20:52.701 ]' 00:20:52.701 02:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.701 02:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 
00:20:52.701 02:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.701 02:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:52.701 02:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.961 02:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.961 02:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.961 02:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.961 02:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:20:52.961 02:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:20:53.529 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.789 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:53.789 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.789 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.789 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.789 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.789 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:53.789 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:54.048 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:54.048 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.048 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.048 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:54.048 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:54.048 02:02:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.048 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.048 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.048 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.048 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.048 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.048 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.048 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.308 00:20:54.308 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.308 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.308 02:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.567 02:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.567 02:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.567 02:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.567 02:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.567 02:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.567 02:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.567 { 00:20:54.567 "cntlid": 77, 00:20:54.567 "qid": 0, 00:20:54.567 "state": "enabled", 00:20:54.567 "thread": "nvmf_tgt_poll_group_000", 00:20:54.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:54.567 "listen_address": { 00:20:54.567 "trtype": "RDMA", 00:20:54.567 "adrfam": "IPv4", 00:20:54.567 "traddr": "192.168.100.8", 00:20:54.567 "trsvcid": "4420" 00:20:54.567 }, 00:20:54.567 "peer_address": { 00:20:54.567 "trtype": "RDMA", 00:20:54.567 "adrfam": "IPv4", 00:20:54.567 "traddr": "192.168.100.8", 00:20:54.567 "trsvcid": "42215" 00:20:54.567 }, 00:20:54.567 "auth": { 00:20:54.567 "state": "completed", 00:20:54.567 "digest": "sha384", 00:20:54.567 "dhgroup": "ffdhe4096" 00:20:54.567 } 
00:20:54.567 } 00:20:54.567 ]' 00:20:54.567 02:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.567 02:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.567 02:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.567 02:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:54.567 02:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.567 02:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.567 02:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.567 02:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.826 02:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:20:54.826 02:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:20:55.394 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.653 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:55.653 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.653 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.653 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.653 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.653 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:55.653 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:55.653 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:55.653 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.653 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:55.653 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:55.653 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:55.653 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.653 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key3 00:20:55.653 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.653 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.653 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.654 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:55.654 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:55.654 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:56.222 00:20:56.222 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.222 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.222 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.222 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.222 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.222 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.222 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.222 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.222 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.222 { 00:20:56.222 "cntlid": 79, 00:20:56.222 "qid": 0, 00:20:56.222 "state": "enabled", 00:20:56.222 "thread": "nvmf_tgt_poll_group_000", 00:20:56.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:56.222 "listen_address": { 00:20:56.222 "trtype": "RDMA", 00:20:56.222 "adrfam": "IPv4", 00:20:56.222 "traddr": "192.168.100.8", 00:20:56.222 "trsvcid": "4420" 00:20:56.222 }, 00:20:56.222 "peer_address": { 00:20:56.222 "trtype": "RDMA", 00:20:56.222 "adrfam": "IPv4", 00:20:56.222 "traddr": "192.168.100.8", 00:20:56.222 "trsvcid": 
"44298" 00:20:56.222 }, 00:20:56.222 "auth": { 00:20:56.222 "state": "completed", 00:20:56.222 "digest": "sha384", 00:20:56.222 "dhgroup": "ffdhe4096" 00:20:56.222 } 00:20:56.222 } 00:20:56.222 ]' 00:20:56.222 02:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.222 02:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.222 02:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.481 02:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:56.481 02:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.481 02:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.481 02:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.481 02:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.740 02:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:20:56.740 02:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:20:57.308 02:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.308 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:57.308 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.308 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.308 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.308 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.308 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.308 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:57.308 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:57.567 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:57.567 02:02:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.567 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.567 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:57.567 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:57.567 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.567 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.567 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.567 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.567 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.567 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.567 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.567 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.825 00:20:57.825 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.825 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.825 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.084 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.084 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.084 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.084 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.084 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.084 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.084 { 00:20:58.084 "cntlid": 81, 00:20:58.084 "qid": 0, 00:20:58.084 "state": "enabled", 00:20:58.084 "thread": "nvmf_tgt_poll_group_000", 00:20:58.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:58.084 "listen_address": { 00:20:58.084 "trtype": 
"RDMA", 00:20:58.084 "adrfam": "IPv4", 00:20:58.084 "traddr": "192.168.100.8", 00:20:58.084 "trsvcid": "4420" 00:20:58.084 }, 00:20:58.084 "peer_address": { 00:20:58.084 "trtype": "RDMA", 00:20:58.084 "adrfam": "IPv4", 00:20:58.084 "traddr": "192.168.100.8", 00:20:58.084 "trsvcid": "40116" 00:20:58.084 }, 00:20:58.084 "auth": { 00:20:58.084 "state": "completed", 00:20:58.084 "digest": "sha384", 00:20:58.084 "dhgroup": "ffdhe6144" 00:20:58.084 } 00:20:58.084 } 00:20:58.084 ]' 00:20:58.084 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.084 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.084 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.343 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:58.343 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.343 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.343 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.343 02:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.602 02:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:20:58.603 02:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:20:59.171 02:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.171 02:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:20:59.171 02:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.171 02:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.171 02:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.171 02:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.171 02:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:59.171 02:02:18 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:59.430 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:59.430 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.430 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:59.430 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:59.430 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:59.430 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.430 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.430 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.431 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.431 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.431 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.431 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.431 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.691 00:20:59.691 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.691 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.691 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.950 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.950 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.950 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.950 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.950 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.950 
02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.950 { 00:20:59.950 "cntlid": 83, 00:20:59.950 "qid": 0, 00:20:59.950 "state": "enabled", 00:20:59.950 "thread": "nvmf_tgt_poll_group_000", 00:20:59.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:20:59.950 "listen_address": { 00:20:59.950 "trtype": "RDMA", 00:20:59.950 "adrfam": "IPv4", 00:20:59.950 "traddr": "192.168.100.8", 00:20:59.950 "trsvcid": "4420" 00:20:59.950 }, 00:20:59.950 "peer_address": { 00:20:59.950 "trtype": "RDMA", 00:20:59.950 "adrfam": "IPv4", 00:20:59.950 "traddr": "192.168.100.8", 00:20:59.950 "trsvcid": "42664" 00:20:59.950 }, 00:20:59.950 "auth": { 00:20:59.950 "state": "completed", 00:20:59.950 "digest": "sha384", 00:20:59.950 "dhgroup": "ffdhe6144" 00:20:59.950 } 00:20:59.950 } 00:20:59.950 ]' 00:20:59.950 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.950 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.950 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.950 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:59.950 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.209 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.209 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.209 02:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.209 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:21:00.209 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:21:00.776 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.036 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:01.036 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.036 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.036 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.036 02:02:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.036 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.036 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.295 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:01.295 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.295 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:01.295 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:01.295 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:01.295 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.295 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.295 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.295 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.295 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.295 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.295 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.295 02:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.554 00:21:01.554 02:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.554 02:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.554 02:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.814 02:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.814 02:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.814 02:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.814 02:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.814 02:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.814 02:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.814 { 00:21:01.814 "cntlid": 85, 00:21:01.814 "qid": 0, 00:21:01.814 "state": "enabled", 00:21:01.814 "thread": "nvmf_tgt_poll_group_000", 00:21:01.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:01.814 "listen_address": { 00:21:01.814 "trtype": "RDMA", 00:21:01.814 "adrfam": "IPv4", 00:21:01.814 "traddr": "192.168.100.8", 00:21:01.814 "trsvcid": "4420" 00:21:01.814 }, 00:21:01.814 "peer_address": { 00:21:01.814 "trtype": "RDMA", 00:21:01.814 "adrfam": "IPv4", 00:21:01.814 "traddr": "192.168.100.8", 00:21:01.814 "trsvcid": "51160" 00:21:01.814 }, 00:21:01.814 "auth": { 00:21:01.814 "state": "completed", 00:21:01.814 "digest": "sha384", 00:21:01.814 "dhgroup": "ffdhe6144" 00:21:01.814 } 00:21:01.814 } 00:21:01.814 ]' 00:21:01.814 02:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.814 02:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.814 02:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.814 02:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:01.814 02:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.074 02:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.074 02:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.074 02:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.074 02:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:21:02.074 02:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:21:02.640 02:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.898 02:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:02.898 02:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:02.898 02:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.898 02:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.898 02:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.898 02:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:02.898 02:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:03.156 02:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:03.156 02:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.156 02:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:03.156 02:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:03.156 02:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:03.156 02:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.156 02:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key3 00:21:03.156 02:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.156 02:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.156 02:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.156 02:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:03.156 02:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.156 02:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.415 00:21:03.415 02:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.415 02:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.415 02:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.673 02:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.673 02:02:23 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.673 02:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.673 02:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.673 02:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.673 02:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.673 { 00:21:03.673 "cntlid": 87, 00:21:03.673 "qid": 0, 00:21:03.673 "state": "enabled", 00:21:03.673 "thread": "nvmf_tgt_poll_group_000", 00:21:03.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:03.673 "listen_address": { 00:21:03.673 "trtype": "RDMA", 00:21:03.673 "adrfam": "IPv4", 00:21:03.673 "traddr": "192.168.100.8", 00:21:03.673 "trsvcid": "4420" 00:21:03.673 }, 00:21:03.673 "peer_address": { 00:21:03.673 "trtype": "RDMA", 00:21:03.673 "adrfam": "IPv4", 00:21:03.673 "traddr": "192.168.100.8", 00:21:03.673 "trsvcid": "58937" 00:21:03.673 }, 00:21:03.673 "auth": { 00:21:03.673 "state": "completed", 00:21:03.673 "digest": "sha384", 00:21:03.673 "dhgroup": "ffdhe6144" 00:21:03.673 } 00:21:03.673 } 00:21:03.673 ]' 00:21:03.673 02:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.673 02:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.673 02:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.673 02:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:03.673 02:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.673 02:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.673 02:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.673 02:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.932 02:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:21:03.932 02:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:21:04.498 02:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.756 02:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:04.756 02:02:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.756 02:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.756 02:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.756 02:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.756 02:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.756 02:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:04.756 02:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:05.014 02:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:05.014 02:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.014 02:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:05.014 02:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:05.014 02:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:05.014 02:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.014 02:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.014 02:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.014 02:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.014 02:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.014 02:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.014 02:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.014 02:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.272 00:21:05.531 02:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.531 02:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.531 02:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.531 02:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.531 02:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.531 02:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.531 02:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.531 02:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.531 02:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.531 { 00:21:05.531 "cntlid": 89, 00:21:05.531 "qid": 0, 00:21:05.531 "state": "enabled", 00:21:05.531 "thread": "nvmf_tgt_poll_group_000", 00:21:05.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:05.531 "listen_address": { 00:21:05.531 "trtype": "RDMA", 00:21:05.531 "adrfam": "IPv4", 00:21:05.531 "traddr": "192.168.100.8", 00:21:05.531 "trsvcid": "4420" 00:21:05.531 }, 00:21:05.531 "peer_address": { 00:21:05.531 "trtype": "RDMA", 00:21:05.531 "adrfam": "IPv4", 00:21:05.531 "traddr": "192.168.100.8", 00:21:05.531 "trsvcid": "42072" 00:21:05.531 }, 00:21:05.531 "auth": { 00:21:05.531 "state": "completed", 00:21:05.531 "digest": "sha384", 00:21:05.531 "dhgroup": "ffdhe8192" 00:21:05.531 } 00:21:05.531 } 00:21:05.531 ]' 00:21:05.531 02:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.789 02:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.789 02:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.789 02:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:05.789 02:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.789 02:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.789 02:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.789 02:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.047 02:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:21:06.047 02:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret 
DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:21:06.618 02:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.618 02:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:06.618 02:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.618 02:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.618 02:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.618 02:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.618 02:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:06.618 02:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:06.957 02:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:06.957 02:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.957 02:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:06.957 02:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:06.957 02:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:06.957 02:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.957 02:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.957 02:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.957 02:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.957 02:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.957 02:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.957 02:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.957 02:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.267 00:21:07.267 02:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.267 02:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.267 02:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.526 02:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.526 02:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.526 02:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.526 02:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.526 02:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.526 02:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.526 { 00:21:07.526 "cntlid": 91, 00:21:07.526 "qid": 0, 00:21:07.526 "state": "enabled", 00:21:07.526 "thread": "nvmf_tgt_poll_group_000", 00:21:07.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:07.526 "listen_address": { 00:21:07.526 "trtype": "RDMA", 00:21:07.526 "adrfam": "IPv4", 00:21:07.526 "traddr": "192.168.100.8", 00:21:07.526 "trsvcid": "4420" 00:21:07.526 }, 00:21:07.526 "peer_address": { 00:21:07.526 "trtype": "RDMA", 00:21:07.526 "adrfam": "IPv4", 00:21:07.526 "traddr": "192.168.100.8", 00:21:07.526 "trsvcid": "45532" 00:21:07.526 }, 00:21:07.526 "auth": { 00:21:07.526 "state": "completed", 00:21:07.526 "digest": "sha384", 00:21:07.526 "dhgroup": "ffdhe8192" 00:21:07.526 } 00:21:07.526 } 00:21:07.526 ]' 00:21:07.526 02:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.526 02:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.526 02:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.785 02:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:07.785 02:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.785 02:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.785 02:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.785 02:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.044 02:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:21:08.044 02:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:21:08.610 02:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.610 02:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:08.610 02:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.610 02:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.610 02:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.610 02:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.611 02:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:08.611 02:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:08.870 02:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:08.870 02:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.870 02:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:08.870 02:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:08.870 02:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:08.870 02:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.870 02:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.870 02:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.870 02:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.870 02:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.870 02:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.870 02:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:21:08.870 02:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.438 00:21:09.438 02:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.438 02:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.438 02:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.438 02:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.438 02:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.438 02:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.438 02:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.438 02:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.698 02:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.698 { 00:21:09.698 "cntlid": 93, 00:21:09.698 "qid": 0, 00:21:09.698 "state": "enabled", 00:21:09.698 "thread": "nvmf_tgt_poll_group_000", 00:21:09.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:09.698 "listen_address": { 00:21:09.698 "trtype": "RDMA", 00:21:09.698 "adrfam": "IPv4", 00:21:09.698 "traddr": "192.168.100.8", 00:21:09.698 "trsvcid": "4420" 00:21:09.698 }, 00:21:09.698 "peer_address": { 00:21:09.698 "trtype": "RDMA", 00:21:09.698 "adrfam": "IPv4", 00:21:09.698 "traddr": "192.168.100.8", 00:21:09.698 "trsvcid": "42244" 00:21:09.698 }, 00:21:09.698 "auth": { 00:21:09.698 "state": "completed", 00:21:09.698 "digest": "sha384", 00:21:09.698 "dhgroup": "ffdhe8192" 00:21:09.698 } 00:21:09.698 } 00:21:09.698 ]' 00:21:09.698 02:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.698 02:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.698 02:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.698 02:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:09.698 02:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.698 02:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.698 02:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.698 02:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.957 02:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # 
nvme_connect --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:21:09.957 02:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:21:10.523 02:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.523 02:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:10.523 02:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.523 02:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.523 02:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.523 02:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.523 02:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:10.523 02:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:10.780 02:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:10.780 02:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.780 02:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:10.780 02:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:10.780 02:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:10.780 02:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.780 02:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key3 00:21:10.780 02:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.780 02:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.780 02:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.780 02:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:10.780 02:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:10.780 02:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.346 00:21:11.346 02:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.346 02:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.346 02:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.604 02:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.604 02:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.604 02:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.604 02:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.604 02:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.604 02:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.604 { 00:21:11.604 "cntlid": 95, 00:21:11.604 "qid": 0, 00:21:11.604 "state": "enabled", 00:21:11.604 "thread": "nvmf_tgt_poll_group_000", 00:21:11.604 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:11.604 "listen_address": { 00:21:11.604 "trtype": "RDMA", 00:21:11.604 "adrfam": "IPv4", 00:21:11.604 "traddr": "192.168.100.8", 00:21:11.604 "trsvcid": "4420" 00:21:11.604 }, 00:21:11.604 "peer_address": { 00:21:11.604 "trtype": "RDMA", 00:21:11.604 "adrfam": "IPv4", 00:21:11.604 "traddr": "192.168.100.8", 00:21:11.604 "trsvcid": "53815" 00:21:11.604 }, 00:21:11.604 "auth": { 00:21:11.604 "state": "completed", 00:21:11.604 "digest": "sha384", 00:21:11.604 "dhgroup": "ffdhe8192" 00:21:11.604 } 00:21:11.604 } 00:21:11.604 ]' 00:21:11.604 02:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.604 02:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.604 02:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.604 02:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:11.604 02:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.604 02:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.604 02:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.604 02:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.862 02:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:21:11.862 02:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:21:12.428 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
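For orientation, the entries above and below repeat one fixed cycle per (digest, dhgroup, key index) combination: pin the host's DH-HMAC-CHAP parameters, authorize the host NQN on the subsystem with that key pair, attach and verify through the SPDK bdev path, detach, then run the same handshake through the kernel initiator and remove the host again. A minimal sketch of one pass, assuming bash; the RPC and nvme-cli invocations are copied from the trace, while the variable names and control flow are reconstructed from the target/auth.sh line references (rpc.py stands in for the full scripts/rpc.py path logged above, and rpc_cmd for the same script against the target's default socket):

    # One connect_authenticate pass (sketch; flow inferred from the trace).
    digest=sha512 dhgroup=null keyid=0
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712

    # Pin the host-side bdev layer to the digest/dhgroup under test.
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Authorize the host on the subsystem with this key pair
    # (the trace shows key3 is added without a controller key).
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Attach through the bdev path; authentication runs during connect.
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # ... qpair verification (see the note after the next stretch) ...
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # Same handshake through the kernel initiator, then clean up.
    # $key/$ckey hold the DHHC-1:xx:...: secret strings shown in the trace.
    nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"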
00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.686 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.944 00:21:12.944 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.944 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.944 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.202 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.202 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.202 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.202 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.202 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.202 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.202 { 00:21:13.202 "cntlid": 97, 00:21:13.202 "qid": 0, 00:21:13.202 "state": "enabled", 00:21:13.202 "thread": "nvmf_tgt_poll_group_000", 00:21:13.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:13.203 "listen_address": { 00:21:13.203 "trtype": "RDMA", 00:21:13.203 "adrfam": "IPv4", 00:21:13.203 "traddr": "192.168.100.8", 00:21:13.203 "trsvcid": "4420" 00:21:13.203 }, 00:21:13.203 "peer_address": { 00:21:13.203 "trtype": "RDMA", 00:21:13.203 "adrfam": "IPv4", 00:21:13.203 "traddr": "192.168.100.8", 00:21:13.203 "trsvcid": "44624" 00:21:13.203 }, 00:21:13.203 "auth": { 00:21:13.203 "state": "completed", 00:21:13.203 "digest": "sha512", 00:21:13.203 "dhgroup": "null" 00:21:13.203 } 00:21:13.203 } 00:21:13.203 ]' 00:21:13.203 02:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.203 02:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.203 02:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.460 02:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:13.460 02:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 
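The jq probes in this stretch read the negotiated parameters back from the first qpair that nvmf_subsystem_get_qpairs reports; the odd-looking [[ sha512 == \s\h\a\5\1\2 ]] lines are just xtrace's escaped rendering of literal string comparisons. A standalone equivalent of the check, assuming the same subsystem NQN and that rpc_cmd wraps scripts/rpc.py as above:

    # The attach only counts if DH-HMAC-CHAP actually completed with the
    # digest and DH group under test on the new qpair.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]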
00:21:13.460 02:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.460 02:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.460 02:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.718 02:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:21:13.718 02:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:21:14.283 02:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.283 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:14.283 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.283 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.283 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.283 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.283 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:14.283 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:14.541 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:14.541 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.541 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.541 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:14.541 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:14.541 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.541 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.541 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.541 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.541 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.541 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.541 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.541 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.799 00:21:14.799 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.799 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.799 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.058 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.058 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.058 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.058 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.058 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.058 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.058 { 00:21:15.058 "cntlid": 99, 00:21:15.058 "qid": 0, 00:21:15.058 "state": "enabled", 00:21:15.058 "thread": "nvmf_tgt_poll_group_000", 00:21:15.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:15.058 "listen_address": { 00:21:15.058 "trtype": "RDMA", 00:21:15.058 "adrfam": "IPv4", 00:21:15.058 "traddr": "192.168.100.8", 00:21:15.058 "trsvcid": "4420" 00:21:15.058 }, 00:21:15.058 "peer_address": { 00:21:15.058 "trtype": "RDMA", 00:21:15.058 "adrfam": "IPv4", 00:21:15.058 "traddr": "192.168.100.8", 00:21:15.058 "trsvcid": "41755" 00:21:15.058 }, 00:21:15.058 "auth": { 00:21:15.058 "state": "completed", 00:21:15.058 "digest": "sha512", 00:21:15.058 "dhgroup": "null" 00:21:15.058 } 00:21:15.058 } 00:21:15.058 ]' 00:21:15.058 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.058 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.058 
02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.058 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:15.058 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.058 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.058 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.058 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.316 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:21:15.316 02:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:21:15.881 02:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.140 02:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:16.140 02:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.140 02:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.140 02:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.140 02:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.140 02:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:16.140 02:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:16.140 02:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:16.140 02:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.140 02:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:16.140 02:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:16.140 02:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:16.140 02:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.140 02:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.140 02:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.140 02:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.140 02:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.140 02:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.140 02:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.140 02:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.397 00:21:16.397 02:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.397 02:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.397 02:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.654 02:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.654 02:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.654 02:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.654 02:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.654 02:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.654 02:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.654 { 00:21:16.654 "cntlid": 101, 00:21:16.654 "qid": 0, 00:21:16.654 "state": "enabled", 00:21:16.654 "thread": "nvmf_tgt_poll_group_000", 00:21:16.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:16.654 "listen_address": { 00:21:16.654 "trtype": "RDMA", 00:21:16.654 "adrfam": "IPv4", 00:21:16.654 "traddr": "192.168.100.8", 00:21:16.654 "trsvcid": "4420" 00:21:16.654 }, 00:21:16.654 "peer_address": { 00:21:16.654 "trtype": "RDMA", 00:21:16.654 "adrfam": "IPv4", 00:21:16.654 "traddr": "192.168.100.8", 00:21:16.654 "trsvcid": "40750" 00:21:16.654 }, 00:21:16.654 "auth": { 00:21:16.654 "state": "completed", 00:21:16.654 "digest": "sha512", 00:21:16.655 "dhgroup": "null" 00:21:16.655 } 00:21:16.655 } 00:21:16.655 ]' 00:21:16.655 02:02:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.655 02:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.655 02:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.913 02:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:16.913 02:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.913 02:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.913 02:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.913 02:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.171 02:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:21:17.171 02:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:21:17.738 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.738 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:17.738 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.738 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.738 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.738 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.738 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:17.738 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:17.997 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:17.997 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.997 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.997 02:02:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:17.997 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:17.997 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.997 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key3 00:21:17.997 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.997 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.997 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.997 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:17.997 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.997 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.257 00:21:18.257 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.257 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.257 02:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.516 02:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.516 02:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.516 02:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.516 02:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.516 02:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.516 02:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.516 { 00:21:18.516 "cntlid": 103, 00:21:18.516 "qid": 0, 00:21:18.516 "state": "enabled", 00:21:18.516 "thread": "nvmf_tgt_poll_group_000", 00:21:18.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:18.516 "listen_address": { 00:21:18.516 "trtype": "RDMA", 00:21:18.516 "adrfam": "IPv4", 00:21:18.516 "traddr": "192.168.100.8", 00:21:18.516 "trsvcid": "4420" 00:21:18.516 }, 00:21:18.516 "peer_address": { 00:21:18.516 "trtype": "RDMA", 00:21:18.516 "adrfam": "IPv4", 00:21:18.516 "traddr": "192.168.100.8", 00:21:18.516 "trsvcid": "39817" 00:21:18.516 }, 00:21:18.516 "auth": { 00:21:18.516 
"state": "completed", 00:21:18.516 "digest": "sha512", 00:21:18.516 "dhgroup": "null" 00:21:18.516 } 00:21:18.516 } 00:21:18.516 ]' 00:21:18.516 02:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.516 02:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.516 02:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.516 02:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:18.516 02:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.516 02:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.516 02:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.516 02:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.774 02:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:21:18.774 02:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:21:19.341 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.599 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:19.599 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.599 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.599 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.599 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.599 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.599 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:19.599 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:19.858 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:19.858 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:21:19.858 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:19.858 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:19.858 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:19.858 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.858 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.858 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.858 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.858 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.858 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.859 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.859 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.117 00:21:20.117 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.117 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.117 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.117 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.117 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.117 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.117 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.117 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.117 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.117 { 00:21:20.117 "cntlid": 105, 00:21:20.117 "qid": 0, 00:21:20.117 "state": "enabled", 00:21:20.117 "thread": "nvmf_tgt_poll_group_000", 00:21:20.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:20.117 "listen_address": { 00:21:20.117 "trtype": "RDMA", 00:21:20.117 "adrfam": "IPv4", 00:21:20.117 "traddr": "192.168.100.8", 00:21:20.117 
"trsvcid": "4420" 00:21:20.117 }, 00:21:20.117 "peer_address": { 00:21:20.117 "trtype": "RDMA", 00:21:20.117 "adrfam": "IPv4", 00:21:20.117 "traddr": "192.168.100.8", 00:21:20.117 "trsvcid": "58773" 00:21:20.117 }, 00:21:20.118 "auth": { 00:21:20.118 "state": "completed", 00:21:20.118 "digest": "sha512", 00:21:20.118 "dhgroup": "ffdhe2048" 00:21:20.118 } 00:21:20.118 } 00:21:20.118 ]' 00:21:20.118 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.376 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.376 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.376 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:20.376 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.376 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.376 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.376 02:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.635 02:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:21:20.635 02:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:21:21.203 02:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.203 02:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:21.203 02:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.203 02:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.203 02:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.203 02:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.203 02:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:21.204 02:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:21.463 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:21.463 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.463 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:21.463 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:21.463 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:21.463 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.463 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.463 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.463 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.463 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.463 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.463 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.463 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.722 00:21:21.722 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.722 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.722 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.981 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.981 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.981 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.981 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.981 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.981 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- 
# qpairs='[ 00:21:21.981 { 00:21:21.981 "cntlid": 107, 00:21:21.981 "qid": 0, 00:21:21.981 "state": "enabled", 00:21:21.981 "thread": "nvmf_tgt_poll_group_000", 00:21:21.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:21.981 "listen_address": { 00:21:21.981 "trtype": "RDMA", 00:21:21.981 "adrfam": "IPv4", 00:21:21.981 "traddr": "192.168.100.8", 00:21:21.981 "trsvcid": "4420" 00:21:21.981 }, 00:21:21.981 "peer_address": { 00:21:21.981 "trtype": "RDMA", 00:21:21.981 "adrfam": "IPv4", 00:21:21.981 "traddr": "192.168.100.8", 00:21:21.981 "trsvcid": "35021" 00:21:21.981 }, 00:21:21.981 "auth": { 00:21:21.981 "state": "completed", 00:21:21.981 "digest": "sha512", 00:21:21.981 "dhgroup": "ffdhe2048" 00:21:21.981 } 00:21:21.981 } 00:21:21.981 ]' 00:21:21.981 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.981 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.981 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.981 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:21.981 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.981 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.981 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.981 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.241 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:21:22.241 02:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:21:22.809 02:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.069 02:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:23.069 02:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.069 02:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.069 02:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.069 02:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
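Each pass in this trace is the same authentication round-trip repeated with a different --dhchap-dhgroups value and key index. A condensed sketch of the loop driving it, reconstructed only from the target/auth.sh line markers visible in this log — hostrpc, rpc_cmd, nvme_connect, and the keys/ckeys/dhgroups arrays are helpers and data defined earlier in the script, outside this excerpt, and the NQN values below stand in for the ones printed above:

# Condensed sketch, not the verbatim script; inlines the connect_authenticate
# steps (auth.sh@65-78) that the trace shows being called from auth.sh@123.
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712
for dhgroup in "${dhgroups[@]}"; do            # null, ffdhe2048, ffdhe3072, ...
  for keyid in "${!keys[@]}"; do
    # auth.sh@121: allow exactly one digest/dhgroup combination on the host
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
    # auth.sh@68-70: register host key N on the target, plus the controller
    # (bidirectional) key N only when one exists -- key3 has no ckey in this run
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"
    # auth.sh@60/71: authenticate a bdev controller over RDMA with the same key pair
    hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid" "${ckey[@]}"
    # auth.sh@73-77: the qpair must report the expected digest/dhgroup and
    # an auth state of "completed"
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'
    # auth.sh@78-83: detach, redo the handshake through the kernel initiator
    # with the raw DHHC-1 secrets, then clean up for the next iteration
    hostrpc bdev_nvme_detach_controller nvme0
    nvme_connect --dhchap-secret "${keys[keyid]}" \
        ${ckeys[keyid]:+--dhchap-ctrl-secret "${ckeys[keyid]}"}
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  done
done

The recurring jq lines in the trace are the assertions at auth.sh@75-77: they pull .auth.digest, .auth.dhgroup, and .auth.state out of the first qpair returned by nvmf_subsystem_get_qpairs and compare them against the expected sha512 / current dhgroup / completed triple.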
00:21:23.069 02:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:23.069 02:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:23.328 02:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:23.328 02:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.328 02:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.328 02:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:23.328 02:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:23.328 02:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.328 02:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.328 02:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.328 02:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.328 02:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.328 02:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.328 02:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.328 02:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.588 00:21:23.588 02:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.588 02:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.588 02:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.588 02:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.588 02:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.588 02:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.588 02:02:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.588 02:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.588 02:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.588 { 00:21:23.588 "cntlid": 109, 00:21:23.588 "qid": 0, 00:21:23.588 "state": "enabled", 00:21:23.588 "thread": "nvmf_tgt_poll_group_000", 00:21:23.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:23.588 "listen_address": { 00:21:23.588 "trtype": "RDMA", 00:21:23.588 "adrfam": "IPv4", 00:21:23.588 "traddr": "192.168.100.8", 00:21:23.588 "trsvcid": "4420" 00:21:23.588 }, 00:21:23.588 "peer_address": { 00:21:23.588 "trtype": "RDMA", 00:21:23.588 "adrfam": "IPv4", 00:21:23.588 "traddr": "192.168.100.8", 00:21:23.588 "trsvcid": "38468" 00:21:23.588 }, 00:21:23.588 "auth": { 00:21:23.588 "state": "completed", 00:21:23.588 "digest": "sha512", 00:21:23.588 "dhgroup": "ffdhe2048" 00:21:23.588 } 00:21:23.588 } 00:21:23.588 ]' 00:21:23.588 02:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.847 02:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.847 02:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.847 02:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:23.847 02:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.847 02:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.847 02:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.847 02:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.106 02:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:21:24.106 02:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:21:24.674 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.674 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:24.674 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.674 02:02:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.674 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.674 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.674 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:24.674 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:24.933 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:24.933 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.933 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.933 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:24.933 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:24.933 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.933 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key3 00:21:24.933 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.933 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.933 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.933 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:24.933 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.934 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.193 00:21:25.193 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.193 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.193 02:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.452 02:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.452 02:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.452 02:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.452 02:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.452 02:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.452 02:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.452 { 00:21:25.452 "cntlid": 111, 00:21:25.452 "qid": 0, 00:21:25.452 "state": "enabled", 00:21:25.452 "thread": "nvmf_tgt_poll_group_000", 00:21:25.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:25.452 "listen_address": { 00:21:25.452 "trtype": "RDMA", 00:21:25.452 "adrfam": "IPv4", 00:21:25.452 "traddr": "192.168.100.8", 00:21:25.452 "trsvcid": "4420" 00:21:25.452 }, 00:21:25.452 "peer_address": { 00:21:25.452 "trtype": "RDMA", 00:21:25.452 "adrfam": "IPv4", 00:21:25.452 "traddr": "192.168.100.8", 00:21:25.452 "trsvcid": "47417" 00:21:25.452 }, 00:21:25.452 "auth": { 00:21:25.452 "state": "completed", 00:21:25.452 "digest": "sha512", 00:21:25.452 "dhgroup": "ffdhe2048" 00:21:25.452 } 00:21:25.452 } 00:21:25.452 ]' 00:21:25.452 02:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.452 02:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.452 02:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.452 02:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:25.452 02:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.452 02:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.452 02:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.452 02:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.712 02:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:21:25.712 02:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:21:26.282 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.544 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:26.544 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:21:26.544 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.544 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.544 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:26.544 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.544 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.544 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.803 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:26.804 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.804 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.804 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:26.804 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:26.804 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.804 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.804 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.804 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.804 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.804 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.804 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.804 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.063 00:21:27.063 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.063 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.063 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.322 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.322 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.322 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.322 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.322 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.322 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.322 { 00:21:27.322 "cntlid": 113, 00:21:27.322 "qid": 0, 00:21:27.322 "state": "enabled", 00:21:27.322 "thread": "nvmf_tgt_poll_group_000", 00:21:27.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:27.322 "listen_address": { 00:21:27.322 "trtype": "RDMA", 00:21:27.322 "adrfam": "IPv4", 00:21:27.322 "traddr": "192.168.100.8", 00:21:27.322 "trsvcid": "4420" 00:21:27.322 }, 00:21:27.322 "peer_address": { 00:21:27.322 "trtype": "RDMA", 00:21:27.322 "adrfam": "IPv4", 00:21:27.322 "traddr": "192.168.100.8", 00:21:27.322 "trsvcid": "49982" 00:21:27.322 }, 00:21:27.322 "auth": { 00:21:27.322 "state": "completed", 00:21:27.322 "digest": "sha512", 00:21:27.322 "dhgroup": "ffdhe3072" 00:21:27.322 } 00:21:27.322 } 00:21:27.322 ]' 00:21:27.322 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.322 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.322 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.322 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:27.322 02:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.322 02:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.322 02:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.322 02:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.582 02:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:21:27.582 02:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:21:28.150 02:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.150 02:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:28.150 02:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.150 02:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.409 02:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.409 02:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.409 02:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:28.410 02:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:28.410 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:28.410 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.410 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.410 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:28.410 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:28.410 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.410 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.410 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.410 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.410 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.410 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.410 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.410 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.669 00:21:28.669 02:02:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.669 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.669 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.928 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.928 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.928 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.928 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.928 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.928 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.928 { 00:21:28.928 "cntlid": 115, 00:21:28.928 "qid": 0, 00:21:28.928 "state": "enabled", 00:21:28.928 "thread": "nvmf_tgt_poll_group_000", 00:21:28.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:28.928 "listen_address": { 00:21:28.928 "trtype": "RDMA", 00:21:28.928 "adrfam": "IPv4", 00:21:28.928 "traddr": "192.168.100.8", 00:21:28.928 "trsvcid": "4420" 00:21:28.928 }, 00:21:28.928 "peer_address": { 00:21:28.928 "trtype": "RDMA", 00:21:28.928 "adrfam": "IPv4", 00:21:28.928 "traddr": "192.168.100.8", 00:21:28.928 "trsvcid": "34002" 00:21:28.928 }, 00:21:28.928 "auth": { 00:21:28.928 "state": "completed", 00:21:28.928 "digest": "sha512", 00:21:28.928 "dhgroup": "ffdhe3072" 00:21:28.928 } 00:21:28.928 } 00:21:28.928 ]' 00:21:28.928 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.928 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.928 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.187 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:29.187 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.187 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.187 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.187 02:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.446 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:21:29.446 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 
80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:21:30.014 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.014 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:30.014 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.014 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.014 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.014 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.014 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:30.014 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:30.273 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:30.273 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.273 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.273 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:30.273 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:30.273 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.273 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.273 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.273 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.273 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.273 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.273 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.273 02:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.532 00:21:30.532 02:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.532 02:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.532 02:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.792 02:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.792 02:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.792 02:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.792 02:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.792 02:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.792 02:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.792 { 00:21:30.792 "cntlid": 117, 00:21:30.792 "qid": 0, 00:21:30.792 "state": "enabled", 00:21:30.792 "thread": "nvmf_tgt_poll_group_000", 00:21:30.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:30.792 "listen_address": { 00:21:30.792 "trtype": "RDMA", 00:21:30.792 "adrfam": "IPv4", 00:21:30.792 "traddr": "192.168.100.8", 00:21:30.792 "trsvcid": "4420" 00:21:30.792 }, 00:21:30.792 "peer_address": { 00:21:30.792 "trtype": "RDMA", 00:21:30.792 "adrfam": "IPv4", 00:21:30.792 "traddr": "192.168.100.8", 00:21:30.792 "trsvcid": "49113" 00:21:30.792 }, 00:21:30.792 "auth": { 00:21:30.792 "state": "completed", 00:21:30.792 "digest": "sha512", 00:21:30.792 "dhgroup": "ffdhe3072" 00:21:30.792 } 00:21:30.792 } 00:21:30.792 ]' 00:21:30.792 02:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.792 02:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.792 02:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.792 02:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:30.792 02:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.051 02:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.051 02:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.051 02:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.051 02:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret 
DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:21:31.051 02:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:21:31.619 02:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.878 02:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:31.878 02:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.878 02:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.878 02:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.878 02:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.878 02:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:31.878 02:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:32.137 02:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:32.137 02:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.137 02:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.137 02:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:32.137 02:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:32.137 02:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.137 02:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key3 00:21:32.137 02:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.137 02:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.138 02:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.138 02:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:32.138 02:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:32.138 02:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:32.397 00:21:32.397 02:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.397 02:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.397 02:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.657 02:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.657 02:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.657 02:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.657 02:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.657 02:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.657 02:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.657 { 00:21:32.657 "cntlid": 119, 00:21:32.657 "qid": 0, 00:21:32.657 "state": "enabled", 00:21:32.657 "thread": "nvmf_tgt_poll_group_000", 00:21:32.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:32.657 "listen_address": { 00:21:32.657 "trtype": "RDMA", 00:21:32.657 "adrfam": "IPv4", 00:21:32.657 "traddr": "192.168.100.8", 00:21:32.657 "trsvcid": "4420" 00:21:32.657 }, 00:21:32.657 "peer_address": { 00:21:32.657 "trtype": "RDMA", 00:21:32.657 "adrfam": "IPv4", 00:21:32.657 "traddr": "192.168.100.8", 00:21:32.657 "trsvcid": "60198" 00:21:32.657 }, 00:21:32.657 "auth": { 00:21:32.657 "state": "completed", 00:21:32.657 "digest": "sha512", 00:21:32.657 "dhgroup": "ffdhe3072" 00:21:32.657 } 00:21:32.657 } 00:21:32.657 ]' 00:21:32.657 02:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.657 02:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.657 02:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.657 02:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:32.657 02:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.657 02:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.657 02:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.657 02:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.916 02:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:21:32.916 02:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:21:33.484 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.744 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:33.744 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.744 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.744 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.744 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:33.744 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.744 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.744 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.744 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:33.744 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.744 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.744 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:33.744 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:33.744 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.744 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.744 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.744 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.744 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.744 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.744 02:02:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.744 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.003 00:21:34.003 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.003 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.003 02:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.262 02:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.262 02:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.262 02:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.262 02:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.262 02:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.262 02:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.262 { 00:21:34.262 "cntlid": 121, 00:21:34.262 "qid": 0, 00:21:34.262 "state": "enabled", 00:21:34.262 "thread": "nvmf_tgt_poll_group_000", 00:21:34.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:34.262 "listen_address": { 00:21:34.262 "trtype": "RDMA", 00:21:34.262 "adrfam": "IPv4", 00:21:34.262 "traddr": "192.168.100.8", 00:21:34.262 "trsvcid": "4420" 00:21:34.262 }, 00:21:34.262 "peer_address": { 00:21:34.262 "trtype": "RDMA", 00:21:34.262 "adrfam": "IPv4", 00:21:34.262 "traddr": "192.168.100.8", 00:21:34.262 "trsvcid": "36193" 00:21:34.262 }, 00:21:34.262 "auth": { 00:21:34.262 "state": "completed", 00:21:34.262 "digest": "sha512", 00:21:34.263 "dhgroup": "ffdhe4096" 00:21:34.263 } 00:21:34.263 } 00:21:34.263 ]' 00:21:34.263 02:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.263 02:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.263 02:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.522 02:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:34.522 02:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.522 02:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.522 02:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.522 
02:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.781 02:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:21:34.781 02:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:21:35.349 02:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.349 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:35.349 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.349 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.349 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.349 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.349 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.349 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.609 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:35.609 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.609 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.609 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:35.609 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:35.609 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.609 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.609 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.609 02:02:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.609 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.609 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.609 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.609 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.868 00:21:35.868 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.868 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.868 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.127 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.127 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.127 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.127 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.127 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.127 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.127 { 00:21:36.127 "cntlid": 123, 00:21:36.127 "qid": 0, 00:21:36.127 "state": "enabled", 00:21:36.127 "thread": "nvmf_tgt_poll_group_000", 00:21:36.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:36.127 "listen_address": { 00:21:36.127 "trtype": "RDMA", 00:21:36.127 "adrfam": "IPv4", 00:21:36.127 "traddr": "192.168.100.8", 00:21:36.127 "trsvcid": "4420" 00:21:36.127 }, 00:21:36.127 "peer_address": { 00:21:36.127 "trtype": "RDMA", 00:21:36.127 "adrfam": "IPv4", 00:21:36.127 "traddr": "192.168.100.8", 00:21:36.127 "trsvcid": "53088" 00:21:36.127 }, 00:21:36.127 "auth": { 00:21:36.127 "state": "completed", 00:21:36.127 "digest": "sha512", 00:21:36.127 "dhgroup": "ffdhe4096" 00:21:36.127 } 00:21:36.127 } 00:21:36.127 ]' 00:21:36.127 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.127 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.127 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.127 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 
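Annotation, not part of the captured output: every connect_authenticate iteration in this trace drives the same host-side RPC sequence, differing only in digest, dhgroup, and key index. A minimal sketch of one sha512/ffdhe4096 pass follows; rpc.py abbreviates the full scripts/rpc.py path shown in the trace, key1/ckey1 are keyring names registered earlier by the test, and the run-specific DHHC-1 secrets are not reproduced.

  # Sketch of one connect_authenticate pass (sha512 / ffdhe4096 / key1).
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712
  # 1) Pin the host to a single digest/dhgroup combination.
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  # 2) Register the host on the subsystem with the key pair under test
  #    (this goes to the target's default RPC socket, as in the trace).
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # 3) Attach a controller that must complete DH-HMAC-CHAP to come up.
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1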
00:21:36.127 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.127 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.127 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.127 02:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.386 02:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:21:36.386 02:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:21:36.954 02:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.213 02:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:37.213 02:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.213 02:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.213 02:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.213 02:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.213 02:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:37.213 02:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:37.472 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:37.472 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.472 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.472 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:37.472 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:37.472 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.472 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.472 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.472 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.472 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.472 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.472 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.472 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.732 00:21:37.732 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.732 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.732 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.991 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.991 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.991 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.991 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.991 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.991 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.991 { 00:21:37.991 "cntlid": 125, 00:21:37.991 "qid": 0, 00:21:37.991 "state": "enabled", 00:21:37.991 "thread": "nvmf_tgt_poll_group_000", 00:21:37.991 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:37.991 "listen_address": { 00:21:37.991 "trtype": "RDMA", 00:21:37.991 "adrfam": "IPv4", 00:21:37.991 "traddr": "192.168.100.8", 00:21:37.991 "trsvcid": "4420" 00:21:37.991 }, 00:21:37.991 "peer_address": { 00:21:37.991 "trtype": "RDMA", 00:21:37.991 "adrfam": "IPv4", 00:21:37.991 "traddr": "192.168.100.8", 00:21:37.991 "trsvcid": "36247" 00:21:37.991 }, 00:21:37.991 "auth": { 00:21:37.991 "state": "completed", 00:21:37.991 "digest": "sha512", 00:21:37.991 "dhgroup": "ffdhe4096" 00:21:37.991 } 00:21:37.991 } 00:21:37.991 ]' 00:21:37.991 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.991 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:21:37.991 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.991 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:37.991 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.991 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.991 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.991 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.250 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:21:38.250 02:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:21:38.818 02:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.077 02:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:39.077 02:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.077 02:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.077 02:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.077 02:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.077 02:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:39.077 02:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:39.077 02:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:39.077 02:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.077 02:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:39.077 02:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:39.077 02:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 
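Annotation: the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion that recurs in this trace is why the key3 iterations call nvmf_subsystem_add_host without any --dhchap-ctrlr-key argument. Bash's ${var:+alt} substitutes alt only when var is set and non-empty, so an empty controller-key slot yields an empty array and the flag simply disappears. A self-contained illustration of the idiom, with placeholder values rather than the test's keys:

  ckeys=(c0 c1 c2 "")          # slot 3 deliberately empty, as for key3 above
  for i in 2 3; do
      ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
      echo "key$i: ${#ckey[@]} extra arg(s): ${ckey[*]}"
  done
  # key2: 2 extra arg(s): --dhchap-ctrlr-key ckey2
  # key3: 0 extra arg(s):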
00:21:39.077 02:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.077 02:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key3 00:21:39.077 02:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.077 02:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.077 02:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.077 02:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:39.077 02:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.077 02:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.337 00:21:39.337 02:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.337 02:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.337 02:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.596 02:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.596 02:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.596 02:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.596 02:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.596 02:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.596 02:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.596 { 00:21:39.596 "cntlid": 127, 00:21:39.596 "qid": 0, 00:21:39.596 "state": "enabled", 00:21:39.596 "thread": "nvmf_tgt_poll_group_000", 00:21:39.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:39.597 "listen_address": { 00:21:39.597 "trtype": "RDMA", 00:21:39.597 "adrfam": "IPv4", 00:21:39.597 "traddr": "192.168.100.8", 00:21:39.597 "trsvcid": "4420" 00:21:39.597 }, 00:21:39.597 "peer_address": { 00:21:39.597 "trtype": "RDMA", 00:21:39.597 "adrfam": "IPv4", 00:21:39.597 "traddr": "192.168.100.8", 00:21:39.597 "trsvcid": "45425" 00:21:39.597 }, 00:21:39.597 "auth": { 00:21:39.597 "state": "completed", 00:21:39.597 "digest": "sha512", 00:21:39.597 "dhgroup": "ffdhe4096" 00:21:39.597 } 00:21:39.597 } 00:21:39.597 ]' 00:21:39.597 02:02:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.597 02:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.597 02:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.597 02:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:39.597 02:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.855 02:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.855 02:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.855 02:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.856 02:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:21:39.856 02:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:21:40.423 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.681 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:40.681 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.681 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.681 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.681 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:40.681 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.681 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.681 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.938 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:40.938 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.938 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.938 02:03:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:40.938 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:40.938 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.938 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.938 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.938 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.938 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.938 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.938 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.938 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.196 00:21:41.196 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.196 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.196 02:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.453 02:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.453 02:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.453 02:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.453 02:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.453 02:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.453 02:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.453 { 00:21:41.453 "cntlid": 129, 00:21:41.453 "qid": 0, 00:21:41.453 "state": "enabled", 00:21:41.453 "thread": "nvmf_tgt_poll_group_000", 00:21:41.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:41.453 "listen_address": { 00:21:41.453 "trtype": "RDMA", 00:21:41.453 "adrfam": "IPv4", 00:21:41.453 "traddr": "192.168.100.8", 00:21:41.453 "trsvcid": "4420" 00:21:41.453 }, 00:21:41.453 "peer_address": { 00:21:41.453 "trtype": "RDMA", 00:21:41.453 "adrfam": "IPv4", 00:21:41.453 "traddr": 
"192.168.100.8", 00:21:41.453 "trsvcid": "47000" 00:21:41.453 }, 00:21:41.453 "auth": { 00:21:41.453 "state": "completed", 00:21:41.453 "digest": "sha512", 00:21:41.453 "dhgroup": "ffdhe6144" 00:21:41.453 } 00:21:41.453 } 00:21:41.453 ]' 00:21:41.453 02:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.453 02:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.453 02:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.453 02:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:41.453 02:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.710 02:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.710 02:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.710 02:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.710 02:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:21:41.710 02:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:21:42.275 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.532 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:42.532 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.532 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.532 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.532 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.532 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:42.532 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:42.790 02:03:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:42.790 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.790 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.790 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:42.790 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:42.790 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.790 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.790 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.790 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.790 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.790 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.790 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.790 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.047 00:21:43.047 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.047 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.047 02:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.304 02:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.304 02:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.304 02:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.304 02:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.304 02:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.304 02:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.304 { 00:21:43.304 "cntlid": 131, 00:21:43.304 "qid": 0, 00:21:43.304 "state": "enabled", 00:21:43.304 "thread": "nvmf_tgt_poll_group_000", 00:21:43.304 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:43.304 "listen_address": { 00:21:43.304 "trtype": "RDMA", 00:21:43.304 "adrfam": "IPv4", 00:21:43.304 "traddr": "192.168.100.8", 00:21:43.304 "trsvcid": "4420" 00:21:43.304 }, 00:21:43.304 "peer_address": { 00:21:43.304 "trtype": "RDMA", 00:21:43.304 "adrfam": "IPv4", 00:21:43.304 "traddr": "192.168.100.8", 00:21:43.304 "trsvcid": "41548" 00:21:43.304 }, 00:21:43.304 "auth": { 00:21:43.304 "state": "completed", 00:21:43.304 "digest": "sha512", 00:21:43.304 "dhgroup": "ffdhe6144" 00:21:43.304 } 00:21:43.304 } 00:21:43.304 ]' 00:21:43.304 02:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.305 02:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.305 02:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.562 02:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:43.562 02:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.562 02:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.562 02:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.562 02:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.819 02:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:21:43.819 02:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:21:44.387 02:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.387 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:44.387 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.387 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.387 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.387 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.387 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
00:21:44.387 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:44.701 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:44.701 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.701 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.701 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:44.701 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:44.701 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.701 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.701 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.701 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.701 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.701 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.701 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.702 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.983 00:21:44.983 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.983 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.983 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.271 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.271 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.271 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.271 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.271 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:21:45.271 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.271 { 00:21:45.271 "cntlid": 133, 00:21:45.271 "qid": 0, 00:21:45.271 "state": "enabled", 00:21:45.271 "thread": "nvmf_tgt_poll_group_000", 00:21:45.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:45.271 "listen_address": { 00:21:45.271 "trtype": "RDMA", 00:21:45.271 "adrfam": "IPv4", 00:21:45.271 "traddr": "192.168.100.8", 00:21:45.271 "trsvcid": "4420" 00:21:45.271 }, 00:21:45.271 "peer_address": { 00:21:45.271 "trtype": "RDMA", 00:21:45.271 "adrfam": "IPv4", 00:21:45.271 "traddr": "192.168.100.8", 00:21:45.271 "trsvcid": "48177" 00:21:45.271 }, 00:21:45.271 "auth": { 00:21:45.271 "state": "completed", 00:21:45.271 "digest": "sha512", 00:21:45.271 "dhgroup": "ffdhe6144" 00:21:45.271 } 00:21:45.271 } 00:21:45.271 ]' 00:21:45.271 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.271 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.271 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.271 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:45.271 02:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.271 02:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.271 02:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.271 02:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.530 02:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:21:45.530 02:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:21:46.098 02:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.098 02:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:46.098 02:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.098 02:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.098 02:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.098 02:03:05 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.098 02:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.098 02:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.356 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:46.356 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.356 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.356 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:46.356 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:46.356 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.356 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key3 00:21:46.356 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.356 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.356 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.356 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:46.356 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.356 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.923 00:21:46.923 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.923 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.923 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.923 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.923 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.923 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.923 02:03:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.923 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.923 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.923 { 00:21:46.923 "cntlid": 135, 00:21:46.923 "qid": 0, 00:21:46.923 "state": "enabled", 00:21:46.923 "thread": "nvmf_tgt_poll_group_000", 00:21:46.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:46.923 "listen_address": { 00:21:46.923 "trtype": "RDMA", 00:21:46.923 "adrfam": "IPv4", 00:21:46.923 "traddr": "192.168.100.8", 00:21:46.923 "trsvcid": "4420" 00:21:46.923 }, 00:21:46.923 "peer_address": { 00:21:46.923 "trtype": "RDMA", 00:21:46.923 "adrfam": "IPv4", 00:21:46.923 "traddr": "192.168.100.8", 00:21:46.923 "trsvcid": "42880" 00:21:46.923 }, 00:21:46.923 "auth": { 00:21:46.923 "state": "completed", 00:21:46.924 "digest": "sha512", 00:21:46.924 "dhgroup": "ffdhe6144" 00:21:46.924 } 00:21:46.924 } 00:21:46.924 ]' 00:21:46.924 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.924 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.924 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.183 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:47.183 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.183 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.183 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.183 02:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.442 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:21:47.442 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:21:48.010 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.010 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:48.010 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.010 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.010 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.010 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:48.010 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.010 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.010 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.270 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:48.270 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.270 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.270 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:48.270 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:48.270 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.270 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.270 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.270 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.270 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.270 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.270 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.270 02:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.838 00:21:48.838 02:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.838 02:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.838 02:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.838 02:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.838 02:03:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.838 02:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.838 02:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.838 02:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.839 02:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.839 { 00:21:48.839 "cntlid": 137, 00:21:48.839 "qid": 0, 00:21:48.839 "state": "enabled", 00:21:48.839 "thread": "nvmf_tgt_poll_group_000", 00:21:48.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:48.839 "listen_address": { 00:21:48.839 "trtype": "RDMA", 00:21:48.839 "adrfam": "IPv4", 00:21:48.839 "traddr": "192.168.100.8", 00:21:48.839 "trsvcid": "4420" 00:21:48.839 }, 00:21:48.839 "peer_address": { 00:21:48.839 "trtype": "RDMA", 00:21:48.839 "adrfam": "IPv4", 00:21:48.839 "traddr": "192.168.100.8", 00:21:48.839 "trsvcid": "57457" 00:21:48.839 }, 00:21:48.839 "auth": { 00:21:48.839 "state": "completed", 00:21:48.839 "digest": "sha512", 00:21:48.839 "dhgroup": "ffdhe8192" 00:21:48.839 } 00:21:48.839 } 00:21:48.839 ]' 00:21:48.839 02:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.839 02:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.839 02:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.097 02:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:49.097 02:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.097 02:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.097 02:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.097 02:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.356 02:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:21:49.356 02:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:21:49.923 02:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.923 02:03:09 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:49.923 02:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.923 02:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.923 02:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.923 02:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.923 02:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:49.923 02:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:50.183 02:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:50.183 02:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.183 02:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:50.183 02:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:50.183 02:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:50.183 02:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.183 02:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.183 02:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.183 02:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.183 02:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.183 02:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.183 02:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.183 02:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.751 00:21:50.751 02:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.751 02:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 
-- # jq -r '.[].name' 00:21:50.751 02:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.751 02:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.751 02:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.751 02:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.751 02:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.751 02:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.751 02:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.751 { 00:21:50.751 "cntlid": 139, 00:21:50.751 "qid": 0, 00:21:50.751 "state": "enabled", 00:21:50.751 "thread": "nvmf_tgt_poll_group_000", 00:21:50.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:50.751 "listen_address": { 00:21:50.751 "trtype": "RDMA", 00:21:50.751 "adrfam": "IPv4", 00:21:50.751 "traddr": "192.168.100.8", 00:21:50.751 "trsvcid": "4420" 00:21:50.751 }, 00:21:50.751 "peer_address": { 00:21:50.751 "trtype": "RDMA", 00:21:50.751 "adrfam": "IPv4", 00:21:50.751 "traddr": "192.168.100.8", 00:21:50.751 "trsvcid": "51566" 00:21:50.751 }, 00:21:50.751 "auth": { 00:21:50.751 "state": "completed", 00:21:50.751 "digest": "sha512", 00:21:50.751 "dhgroup": "ffdhe8192" 00:21:50.751 } 00:21:50.751 } 00:21:50.751 ]' 00:21:50.751 02:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.010 02:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.010 02:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.010 02:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:51.010 02:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.010 02:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.010 02:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.010 02:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.269 02:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:21:51.269 02:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: --dhchap-ctrl-secret 
DHHC-1:02:MDk2NGQ0NTJlNDM4ZTE2NDY0MWJhZTU0NGE1ZjM0OWZkYTA1MTczYTE4NDZiNWE5OoM71Q==: 00:21:51.837 02:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.837 02:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:51.837 02:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.837 02:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.837 02:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.837 02:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.837 02:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.837 02:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:52.096 02:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:52.096 02:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.096 02:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:52.096 02:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:52.096 02:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:52.096 02:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.096 02:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.096 02:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.096 02:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.096 02:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.096 02:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.096 02:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.096 02:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.664 00:21:52.664 02:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.664 02:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.664 02:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.924 02:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.924 02:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.924 02:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.924 02:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.924 02:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.924 02:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.924 { 00:21:52.924 "cntlid": 141, 00:21:52.924 "qid": 0, 00:21:52.924 "state": "enabled", 00:21:52.924 "thread": "nvmf_tgt_poll_group_000", 00:21:52.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:52.924 "listen_address": { 00:21:52.924 "trtype": "RDMA", 00:21:52.924 "adrfam": "IPv4", 00:21:52.924 "traddr": "192.168.100.8", 00:21:52.924 "trsvcid": "4420" 00:21:52.924 }, 00:21:52.924 "peer_address": { 00:21:52.924 "trtype": "RDMA", 00:21:52.924 "adrfam": "IPv4", 00:21:52.924 "traddr": "192.168.100.8", 00:21:52.924 "trsvcid": "38176" 00:21:52.924 }, 00:21:52.924 "auth": { 00:21:52.924 "state": "completed", 00:21:52.924 "digest": "sha512", 00:21:52.924 "dhgroup": "ffdhe8192" 00:21:52.924 } 00:21:52.924 } 00:21:52.924 ]' 00:21:52.924 02:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.924 02:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.924 02:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.924 02:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:52.924 02:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.924 02:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.924 02:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.924 02:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.183 02:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:21:53.183 02:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2Y1OWFkOWFlNDJjOWUxZjUwNmJiM2JhY2E3ZWNlODF3+Bzc: 00:21:53.751 02:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.751 02:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:53.751 02:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.751 02:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.751 02:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.751 02:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.751 02:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:53.751 02:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.010 02:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:54.010 02:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.010 02:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:54.010 02:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:54.010 02:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:54.010 02:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.010 02:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key3 00:21:54.010 02:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.010 02:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.010 02:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.010 02:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:54.010 02:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.010 02:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.578 00:21:54.578 02:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.578 02:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.578 02:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.837 02:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.837 02:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.837 02:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.837 02:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.837 02:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.837 02:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.837 { 00:21:54.837 "cntlid": 143, 00:21:54.837 "qid": 0, 00:21:54.837 "state": "enabled", 00:21:54.837 "thread": "nvmf_tgt_poll_group_000", 00:21:54.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:54.838 "listen_address": { 00:21:54.838 "trtype": "RDMA", 00:21:54.838 "adrfam": "IPv4", 00:21:54.838 "traddr": "192.168.100.8", 00:21:54.838 "trsvcid": "4420" 00:21:54.838 }, 00:21:54.838 "peer_address": { 00:21:54.838 "trtype": "RDMA", 00:21:54.838 "adrfam": "IPv4", 00:21:54.838 "traddr": "192.168.100.8", 00:21:54.838 "trsvcid": "43020" 00:21:54.838 }, 00:21:54.838 "auth": { 00:21:54.838 "state": "completed", 00:21:54.838 "digest": "sha512", 00:21:54.838 "dhgroup": "ffdhe8192" 00:21:54.838 } 00:21:54.838 } 00:21:54.838 ]' 00:21:54.838 02:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.838 02:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.838 02:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.838 02:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:54.838 02:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.838 02:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.838 02:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.838 02:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.097 02:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:21:55.097 02:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:21:55.665 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.665 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:55.665 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.665 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.665 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.665 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:55.665 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:55.665 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:55.665 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:55.665 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:55.665 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:55.925 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:55.925 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.925 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:55.925 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:55.925 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:55.925 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.925 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.925 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.925 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
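The cycle traced above is the connect_authenticate helper from target/auth.sh, run once per digest/DH-group/key combination. A minimal sketch of one iteration, reconstructed only from the commands visible in this trace (the rpc.py path, host RPC socket, subsystem and host NQNs, and addresses are copied from the log; the $rpc/$hostnqn shorthand is added here for readability and is an assumption):

    # Host-side RPC shorthand (script path and socket as seen in the trace above).
    rpc="/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712"

    # 1. Limit the host to the digest / DH group under test.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # 2. On the target side (rpc_cmd in the script), authorize the host NQN
    #    with the key pair under test:
    #      nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    #          --dhchap-key keyN [--dhchap-ctrlr-key ckeyN]

    # 3. Attach a controller from the host using the same key material.
    $rpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 4. Verify authentication completed: the controller exists and the qpair
    #    reported by nvmf_subsystem_get_qpairs carries the expected digest,
    #    dhgroup, and auth.state == "completed".
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0

    # 5. Tear down, repeat the check through the kernel path with
    #    `nvme connect ... --dhchap-secret DHHC-1:...` / `nvme disconnect`,
    #    then nvmf_subsystem_remove_host before the next combination.
    $rpc bdev_nvme_detach_controller nvme0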
00:21:55.925 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.925 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.925 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.925 02:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.493 00:21:56.493 02:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.493 02:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.493 02:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.752 02:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.752 02:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.752 02:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.752 02:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.752 02:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.752 02:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.752 { 00:21:56.752 "cntlid": 145, 00:21:56.752 "qid": 0, 00:21:56.752 "state": "enabled", 00:21:56.752 "thread": "nvmf_tgt_poll_group_000", 00:21:56.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:56.752 "listen_address": { 00:21:56.752 "trtype": "RDMA", 00:21:56.752 "adrfam": "IPv4", 00:21:56.752 "traddr": "192.168.100.8", 00:21:56.752 "trsvcid": "4420" 00:21:56.752 }, 00:21:56.752 "peer_address": { 00:21:56.752 "trtype": "RDMA", 00:21:56.752 "adrfam": "IPv4", 00:21:56.752 "traddr": "192.168.100.8", 00:21:56.752 "trsvcid": "37427" 00:21:56.752 }, 00:21:56.752 "auth": { 00:21:56.752 "state": "completed", 00:21:56.752 "digest": "sha512", 00:21:56.752 "dhgroup": "ffdhe8192" 00:21:56.752 } 00:21:56.752 } 00:21:56.752 ]' 00:21:56.752 02:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.752 02:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.752 02:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.752 02:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:56.752 02:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:56.752 02:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.752 02:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.752 02:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.011 02:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:21:57.011 02:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:00:MzcyZmI3Mjk5MjJiYTMxMTIxZWE2MjA1YzFlYjA5MTk2YzFkNjJlMDBjZmFiNjcx4jrz8w==: --dhchap-ctrl-secret DHHC-1:03:YTMzMDcwZTFlYTUwZWEyYjdjOGZhNzk2MGI2ZDAyODFkMDlkOGVmMzk3NGRlNDVhYTJiODJkZmY4MTUwNTc2ZgUHowA=: 00:21:57.579 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.579 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:57.579 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.579 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.579 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.579 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 00:21:57.579 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.579 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.579 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.579 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:57.579 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:57.579 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:57.579 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:57.579 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:57.579 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:57.579 02:03:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:57.579 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:57.579 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:57.579 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:58.148 request: 00:21:58.148 { 00:21:58.148 "name": "nvme0", 00:21:58.148 "trtype": "rdma", 00:21:58.148 "traddr": "192.168.100.8", 00:21:58.148 "adrfam": "ipv4", 00:21:58.148 "trsvcid": "4420", 00:21:58.148 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:58.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:58.148 "prchk_reftag": false, 00:21:58.148 "prchk_guard": false, 00:21:58.148 "hdgst": false, 00:21:58.148 "ddgst": false, 00:21:58.148 "dhchap_key": "key2", 00:21:58.148 "allow_unrecognized_csi": false, 00:21:58.148 "method": "bdev_nvme_attach_controller", 00:21:58.148 "req_id": 1 00:21:58.148 } 00:21:58.148 Got JSON-RPC error response 00:21:58.148 response: 00:21:58.148 { 00:21:58.148 "code": -5, 00:21:58.148 "message": "Input/output error" 00:21:58.148 } 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey2 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:58.148 02:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:58.717 request: 00:21:58.717 { 00:21:58.717 "name": "nvme0", 00:21:58.717 "trtype": "rdma", 00:21:58.717 "traddr": "192.168.100.8", 00:21:58.717 "adrfam": "ipv4", 00:21:58.717 "trsvcid": "4420", 00:21:58.717 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:58.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:58.717 "prchk_reftag": false, 00:21:58.717 "prchk_guard": false, 00:21:58.717 "hdgst": false, 00:21:58.717 "ddgst": false, 00:21:58.717 "dhchap_key": "key1", 00:21:58.717 "dhchap_ctrlr_key": "ckey2", 00:21:58.717 "allow_unrecognized_csi": false, 00:21:58.717 "method": "bdev_nvme_attach_controller", 00:21:58.717 "req_id": 1 00:21:58.717 } 00:21:58.717 Got JSON-RPC error response 00:21:58.717 response: 00:21:58.717 { 00:21:58.717 "code": -5, 00:21:58.717 "message": "Input/output error" 00:21:58.717 } 00:21:58.717 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:58.717 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:58.717 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:58.717 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:58.717 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:58.717 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.717 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.717 02:03:18 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.717 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 00:21:58.717 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.717 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.717 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.717 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.717 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:58.717 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.717 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:58.717 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:58.717 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:58.717 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:58.717 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.717 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.717 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.286 request: 00:21:59.286 { 00:21:59.286 "name": "nvme0", 00:21:59.286 "trtype": "rdma", 00:21:59.286 "traddr": "192.168.100.8", 00:21:59.286 "adrfam": "ipv4", 00:21:59.286 "trsvcid": "4420", 00:21:59.286 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:59.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:21:59.286 "prchk_reftag": false, 00:21:59.286 "prchk_guard": false, 00:21:59.286 "hdgst": false, 00:21:59.286 "ddgst": false, 00:21:59.286 "dhchap_key": "key1", 00:21:59.286 "dhchap_ctrlr_key": "ckey1", 00:21:59.286 "allow_unrecognized_csi": false, 00:21:59.286 "method": "bdev_nvme_attach_controller", 00:21:59.286 "req_id": 1 00:21:59.286 } 00:21:59.286 Got JSON-RPC error response 00:21:59.286 response: 00:21:59.286 { 00:21:59.286 "code": -5, 00:21:59.286 "message": "Input/output error" 00:21:59.286 } 00:21:59.286 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:59.286 
02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:59.286 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:59.287 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:59.287 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:21:59.287 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.287 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.287 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.287 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3264220 00:21:59.287 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3264220 ']' 00:21:59.287 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3264220 00:21:59.287 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:59.287 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:59.287 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3264220 00:21:59.287 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:59.287 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:59.287 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3264220' 00:21:59.287 killing process with pid 3264220 00:21:59.287 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3264220 00:21:59.287 02:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3264220 00:22:00.666 02:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:00.666 02:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:00.666 02:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:00.666 02:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.666 02:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=3283843 00:22:00.666 02:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:00.666 02:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 3283843 00:22:00.666 02:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3283843 ']' 00:22:00.666 02:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.666 02:03:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:00.666 02:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.666 02:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:00.666 02:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.603 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.603 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:01.603 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:01.603 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:01.603 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.603 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.603 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:01.603 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3283843 00:22:01.603 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3283843 ']' 00:22:01.603 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.603 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:01.603 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:01.603 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:01.603 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.862 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.862 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:01.862 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:01.862 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.862 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.121 null0 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.UdU 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Gp9 ]] 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Gp9 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.A9t 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.6ne ]] 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6ne 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Nfn 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.GfP ]] 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GfP 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.YfU 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key3 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.122 02:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:03.059 nvme0n1 00:22:03.059 02:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.059 02:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.059 02:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.059 02:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.059 02:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.059 02:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.059 02:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.059 02:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.059 02:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.059 { 00:22:03.059 "cntlid": 1, 00:22:03.059 "qid": 0, 00:22:03.059 "state": "enabled", 00:22:03.059 "thread": "nvmf_tgt_poll_group_000", 00:22:03.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:22:03.059 "listen_address": { 00:22:03.059 "trtype": "RDMA", 00:22:03.059 "adrfam": "IPv4", 00:22:03.059 "traddr": "192.168.100.8", 00:22:03.059 "trsvcid": "4420" 00:22:03.059 }, 00:22:03.059 "peer_address": { 00:22:03.059 "trtype": "RDMA", 00:22:03.059 "adrfam": "IPv4", 00:22:03.059 "traddr": "192.168.100.8", 00:22:03.059 "trsvcid": "56239" 00:22:03.059 }, 00:22:03.059 "auth": { 00:22:03.059 "state": "completed", 00:22:03.059 "digest": "sha512", 00:22:03.059 "dhgroup": "ffdhe8192" 00:22:03.059 } 00:22:03.059 } 00:22:03.059 ]' 00:22:03.059 02:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.319 02:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.319 02:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.319 02:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:03.319 02:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.319 02:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.319 02:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.319 02:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.578 02:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:22:03.578 02:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:22:04.145 02:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.145 02:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:22:04.145 02:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.145 02:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.404 02:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.404 02:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key3 00:22:04.404 02:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.404 02:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.404 02:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.404 02:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:04.404 02:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:04.404 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:04.404 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:04.404 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:04.404 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:04.404 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:04.404 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:04.404 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:04.404 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:04.404 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:04.404 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:04.663 request: 00:22:04.663 { 00:22:04.663 "name": "nvme0", 00:22:04.663 "trtype": "rdma", 00:22:04.663 "traddr": "192.168.100.8", 00:22:04.663 "adrfam": "ipv4", 00:22:04.663 "trsvcid": "4420", 00:22:04.663 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:04.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:22:04.663 "prchk_reftag": false, 00:22:04.663 "prchk_guard": false, 00:22:04.663 "hdgst": false, 00:22:04.663 "ddgst": false, 00:22:04.663 "dhchap_key": "key3", 00:22:04.663 "allow_unrecognized_csi": false, 00:22:04.663 "method": "bdev_nvme_attach_controller", 00:22:04.663 "req_id": 1 00:22:04.663 } 00:22:04.664 Got JSON-RPC error response 00:22:04.664 response: 00:22:04.664 { 00:22:04.664 "code": -5, 00:22:04.664 "message": "Input/output error" 00:22:04.664 } 00:22:04.664 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:04.664 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:04.664 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:04.664 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:04.664 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:04.664 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:04.664 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:04.664 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:04.922 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:04.922 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:04.922 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:04.922 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:04.922 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:04.922 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:04.922 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:04.922 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b 
nvme0 --dhchap-key key3 00:22:04.922 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:04.922 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.181 request: 00:22:05.181 { 00:22:05.181 "name": "nvme0", 00:22:05.181 "trtype": "rdma", 00:22:05.181 "traddr": "192.168.100.8", 00:22:05.181 "adrfam": "ipv4", 00:22:05.181 "trsvcid": "4420", 00:22:05.181 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:22:05.181 "prchk_reftag": false, 00:22:05.181 "prchk_guard": false, 00:22:05.181 "hdgst": false, 00:22:05.181 "ddgst": false, 00:22:05.181 "dhchap_key": "key3", 00:22:05.181 "allow_unrecognized_csi": false, 00:22:05.181 "method": "bdev_nvme_attach_controller", 00:22:05.181 "req_id": 1 00:22:05.181 } 00:22:05.181 Got JSON-RPC error response 00:22:05.181 response: 00:22:05.181 { 00:22:05.181 "code": -5, 00:22:05.181 "message": "Input/output error" 00:22:05.181 } 00:22:05.181 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:05.181 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:05.181 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:05.181 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:05.181 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:05.181 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:05.181 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:05.181 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.181 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.181 02:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.441 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:22:05.441 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.441 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.441 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.441 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:22:05.441 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.441 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.441 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.441 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.441 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:05.441 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.441 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:05.441 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.441 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:05.441 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.441 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.441 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.441 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.700 request: 00:22:05.700 { 00:22:05.700 "name": "nvme0", 00:22:05.700 "trtype": "rdma", 00:22:05.700 "traddr": "192.168.100.8", 00:22:05.700 "adrfam": "ipv4", 00:22:05.700 "trsvcid": "4420", 00:22:05.700 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:22:05.700 "prchk_reftag": false, 00:22:05.700 "prchk_guard": false, 00:22:05.700 "hdgst": false, 00:22:05.700 "ddgst": false, 00:22:05.700 "dhchap_key": "key0", 00:22:05.700 "dhchap_ctrlr_key": "key1", 00:22:05.700 "allow_unrecognized_csi": false, 00:22:05.700 "method": "bdev_nvme_attach_controller", 00:22:05.700 "req_id": 1 00:22:05.700 } 00:22:05.700 Got JSON-RPC error response 00:22:05.700 response: 00:22:05.700 { 00:22:05.700 "code": -5, 00:22:05.700 "message": "Input/output error" 00:22:05.700 } 00:22:05.959 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:05.959 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:05.959 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:05.959 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:05.959 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:05.959 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:05.959 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:06.218 nvme0n1 00:22:06.218 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:06.218 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:06.218 02:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.218 02:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.218 02:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.218 02:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.478 02:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 00:22:06.478 02:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.478 02:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.478 02:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.478 02:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:06.478 02:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:06.478 02:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:07.415 nvme0n1 00:22:07.415 02:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # 
hostrpc bdev_nvme_get_controllers 00:22:07.415 02:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.415 02:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:07.415 02:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.415 02:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:07.415 02:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.415 02:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.415 02:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.415 02:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:07.415 02:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:07.415 02:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.674 02:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.674 02:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:22:07.674 02:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid 80e71deb-ee4e-e711-906e-0012795d9712 -l 0 --dhchap-secret DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: --dhchap-ctrl-secret DHHC-1:03:NmI4MjljN2ZjNDdhMDZiMDIzMzhiZTM3MzUxYzYyYTdiNmQ4ZDljM2JjMDJlNjQxZDQ2OTRlNWUwYTViNGJkNT9Qd/4=: 00:22:08.242 02:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:08.242 02:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:08.243 02:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:08.243 02:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:08.243 02:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:08.243 02:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:08.243 02:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:08.243 02:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.243 02:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.502 02:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:22:08.502 02:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:08.502 02:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:08.502 02:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:08.502 02:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:08.502 02:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:08.502 02:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:08.502 02:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:08.502 02:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:08.502 02:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:09.069 request: 00:22:09.069 { 00:22:09.069 "name": "nvme0", 00:22:09.069 "trtype": "rdma", 00:22:09.069 "traddr": "192.168.100.8", 00:22:09.069 "adrfam": "ipv4", 00:22:09.069 "trsvcid": "4420", 00:22:09.069 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:09.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712", 00:22:09.069 "prchk_reftag": false, 00:22:09.069 "prchk_guard": false, 00:22:09.069 "hdgst": false, 00:22:09.069 "ddgst": false, 00:22:09.069 "dhchap_key": "key1", 00:22:09.069 "allow_unrecognized_csi": false, 00:22:09.069 "method": "bdev_nvme_attach_controller", 00:22:09.069 "req_id": 1 00:22:09.069 } 00:22:09.069 Got JSON-RPC error response 00:22:09.069 response: 00:22:09.069 { 00:22:09.069 "code": -5, 00:22:09.069 "message": "Input/output error" 00:22:09.069 } 00:22:09.069 02:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:09.069 02:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:09.069 02:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:09.069 02:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:09.069 02:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:09.069 02:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:09.069 02:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:10.006 nvme0n1 00:22:10.006 02:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:10.006 02:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:10.006 02:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.006 02:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.006 02:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.006 02:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.265 02:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:22:10.265 02:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.265 02:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.265 02:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.265 02:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:10.265 02:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:10.265 02:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:10.524 nvme0n1 00:22:10.524 02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:10.524 02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:10.524 02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.783 02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.783 02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.783 
02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.783 02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:10.783 02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.783 02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.783 02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.783 02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: '' 2s 00:22:10.783 02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:10.783 02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:10.783 02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: 00:22:10.783 02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:11.042 02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:11.042 02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:11.042 02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: ]] 00:22:11.042 02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:N2VmNGIzODNhMGU2YTc0ZGU4OWY2NGJiMmUxYWI2Yma+KdZE: 00:22:11.042 02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:11.042 02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:11.042 02:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: 2s 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: ]] 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YTZiYTdlOGIzMDkwZWQwZjg5YTQ5MDUxZGQ5NjBmN2ZiMjAzMTkxOTk0OGE3MWU5URqmAA==: 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:12.946 02:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:14.850 02:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:14.850 02:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:14.850 02:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:14.851 02:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:14.851 02:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:15.109 02:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:15.109 02:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:15.109 02:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.109 02:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:15.109 02:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.109 02:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.110 02:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.110 02:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:15.110 02:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:15.110 02:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:16.046 nvme0n1 00:22:16.046 02:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:16.046 02:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.046 02:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.046 02:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.046 02:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:16.046 02:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:16.304 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:16.304 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:16.304 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.562 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.562 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:22:16.562 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.562 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.562 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.562 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:16.562 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:16.821 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:16.821 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:16.821 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.080 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.080 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:17.080 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.080 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.080 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.080 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:17.080 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:17.080 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:17.080 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:17.080 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.080 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:17.080 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.080 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:17.080 02:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:17.338 request: 00:22:17.338 { 00:22:17.338 "name": "nvme0", 00:22:17.338 "dhchap_key": "key1", 00:22:17.338 "dhchap_ctrlr_key": "key3", 00:22:17.338 "method": "bdev_nvme_set_keys", 00:22:17.338 "req_id": 1 00:22:17.338 } 00:22:17.338 Got JSON-RPC error response 00:22:17.338 response: 00:22:17.338 { 00:22:17.338 "code": -13, 00:22:17.338 "message": "Permission denied" 00:22:17.338 } 00:22:17.338 02:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:17.338 02:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:17.338 02:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:17.338 02:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:17.338 02:03:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:17.338 02:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:17.338 02:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.596 02:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:17.596 02:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:18.970 02:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:18.970 02:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:18.970 02:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.970 02:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:18.970 02:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:18.970 02:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.970 02:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.970 02:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.970 02:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:18.970 02:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:18.970 02:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:19.536 nvme0n1 00:22:19.536 02:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:19.536 02:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.536 02:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.536 02:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.536 02:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT 
hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:19.536 02:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:19.536 02:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:19.536 02:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:19.536 02:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.536 02:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:19.536 02:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.536 02:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:19.536 02:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:20.102 request: 00:22:20.102 { 00:22:20.102 "name": "nvme0", 00:22:20.102 "dhchap_key": "key2", 00:22:20.102 "dhchap_ctrlr_key": "key0", 00:22:20.102 "method": "bdev_nvme_set_keys", 00:22:20.102 "req_id": 1 00:22:20.102 } 00:22:20.102 Got JSON-RPC error response 00:22:20.102 response: 00:22:20.102 { 00:22:20.102 "code": -13, 00:22:20.102 "message": "Permission denied" 00:22:20.102 } 00:22:20.102 02:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:20.102 02:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:20.102 02:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:20.102 02:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:20.102 02:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:20.102 02:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.102 02:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:20.360 02:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:20.360 02:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:21.293 02:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:21.293 02:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:21.293 02:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.552 02:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:21.552 02:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:21.552 02:03:41 
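The paired "jq length" probes with a one-second sleep in between form a small wait loop: after the rejected rekey, the host is expected to drop the controller within the --ctrlr-loss-timeout-sec of 1 it was attached with. A standalone sketch of the same idiom, using the host RPC socket from this run:

  # Poll until the host bdev layer has torn down the failed controller.
  while [ "$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length)" -ne 0 ]; do
      sleep 1s
  done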
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:21.552 02:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3264396 00:22:21.552 02:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3264396 ']' 00:22:21.552 02:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3264396 00:22:21.552 02:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:21.552 02:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:21.552 02:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3264396 00:22:21.552 02:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:21.552 02:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:21.552 02:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3264396' 00:22:21.552 killing process with pid 3264396 00:22:21.553 02:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3264396 00:22:21.553 02:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3264396 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:22:24.088 rmmod nvme_rdma 00:22:24.088 rmmod nvme_fabrics 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 3283843 ']' 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 3283843 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3283843 ']' 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3283843 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3283843 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3283843' 00:22:24.088 killing process with pid 3283843 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3283843 00:22:24.088 02:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3283843 00:22:25.465 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:25.466 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:22:25.466 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.UdU /tmp/spdk.key-sha256.A9t /tmp/spdk.key-sha384.Nfn /tmp/spdk.key-sha512.YfU /tmp/spdk.key-sha512.Gp9 /tmp/spdk.key-sha384.6ne /tmp/spdk.key-sha256.GfP '' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf-auth.log 00:22:25.466 00:22:25.466 real 2m49.830s 00:22:25.466 user 6m27.905s 00:22:25.466 sys 0m25.773s 00:22:25.466 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:25.466 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.466 ************************************ 00:22:25.466 END TEST nvmf_auth_target 00:22:25.466 ************************************ 00:22:25.466 02:03:45 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:22:25.466 02:03:45 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:22:25.466 02:03:45 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:22:25.466 02:03:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:25.466 02:03:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:25.466 02:03:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:25.466 ************************************ 00:22:25.466 START TEST nvmf_fuzz 00:22:25.466 ************************************ 00:22:25.466 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:22:25.466 * Looking for test storage... 
00:22:25.466 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:22:25.466 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:25.466 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:22:25.466 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:25.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.725 --rc genhtml_branch_coverage=1 00:22:25.725 --rc genhtml_function_coverage=1 00:22:25.725 --rc genhtml_legend=1 00:22:25.725 --rc geninfo_all_blocks=1 00:22:25.725 --rc geninfo_unexecuted_blocks=1 00:22:25.725 00:22:25.725 ' 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:25.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.725 --rc genhtml_branch_coverage=1 00:22:25.725 --rc genhtml_function_coverage=1 00:22:25.725 --rc genhtml_legend=1 00:22:25.725 --rc geninfo_all_blocks=1 00:22:25.725 --rc geninfo_unexecuted_blocks=1 00:22:25.725 00:22:25.725 ' 00:22:25.725 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:25.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.726 --rc genhtml_branch_coverage=1 00:22:25.726 --rc genhtml_function_coverage=1 00:22:25.726 --rc genhtml_legend=1 00:22:25.726 --rc geninfo_all_blocks=1 00:22:25.726 --rc geninfo_unexecuted_blocks=1 00:22:25.726 00:22:25.726 ' 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:25.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.726 --rc genhtml_branch_coverage=1 00:22:25.726 --rc genhtml_function_coverage=1 00:22:25.726 --rc genhtml_legend=1 00:22:25.726 --rc geninfo_all_blocks=1 00:22:25.726 --rc geninfo_unexecuted_blocks=1 00:22:25.726 00:22:25.726 ' 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:25.726 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:22:25.726 02:03:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:32.421 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.421 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:22:32.421 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:32.421 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:32.421 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:22:32.422 Found 0000:18:00.0 (0x8086 - 0x159b) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:22:32.422 Found 0000:18:00.1 (0x8086 - 0x159b) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:32.422 02:03:52 
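The scan above works from a cache of PCI vendor:device pairs; 0x8086:0x159b is the Intel E810 function (driver ice) that this rig exposes twice. Outside the harness the same match can be made straight from sysfs; a minimal sketch, not the suite's actual helper:

  # Enumerate PCI functions and report the ones matching the E810 ID pair.
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(cat "$dev/vendor")    # e.g. 0x8086
      device=$(cat "$dev/device")    # e.g. 0x159b
      if [ "$vendor" = 0x8086 ] && [ "$device" = 0x159b ]; then
          echo "Found ${dev##*/} ($vendor - $device)"
      fi
  done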
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@403 -- # modinfo irdma 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:22:32.422 Found net devices under 0000:18:00.0: cvl_0_0 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:22:32.422 Found net devices under 0000:18:00.1: cvl_0_1 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # is_hw=yes 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # rdma_device_init 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # uname 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@528 -- # allocate_nic_ips 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo cvl_0_0 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo cvl_0_1 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@117 -- # cut -d/ -f1 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:32.422 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:22:32.423 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:22:32.423 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:22:32.423 altname enp24s0f0np0 00:22:32.423 altname ens785f0np0 00:22:32.423 inet 192.168.100.8/24 scope global cvl_0_0 00:22:32.423 valid_lft forever preferred_lft forever 00:22:32.423 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:22:32.423 valid_lft forever preferred_lft forever 00:22:32.423 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:32.423 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:22:32.423 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:22:32.423 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:32.423 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:22:32.423 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:32.423 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:32.423 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:32.423 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:22:32.423 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:22:32.423 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:22:32.423 altname enp24s0f1np1 00:22:32.423 altname ens785f1np1 00:22:32.423 inet 192.168.100.9/24 scope global cvl_0_1 00:22:32.423 valid_lft forever preferred_lft forever 00:22:32.423 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:22:32.423 valid_lft forever preferred_lft forever 00:22:32.423 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # return 0 00:22:32.423 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:32.423 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:32.423 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:22:32.423 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:22:32.423 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:32.423 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:32.423 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:32.423 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:32.423 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for 
net_dev in "${net_devs[@]}" 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo cvl_0_0 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo cvl_0_1 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:22:32.682 192.168.100.9' 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:22:32.682 192.168.100.9' 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # head -n 1 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:22:32.682 192.168.100.9' 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # tail -n +2 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # head -n 1 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:32.682 
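Each address is extracted with the awk/cut pair traced in get_ip_address, and the first and second target IPs are then peeled off the newline-separated list with head and tail. Pulled out of the harness as a sketch:

  get_ip_address() {
      local interface=$1
      # -o prints one line per address with "ADDR/PREFIX" in field 4;
      # cut strips the prefix length.
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  # Newline-separated list, as in the trace: 192.168.100.8 then 192.168.100.9.
  RDMA_IP_LIST=$(printf '%s\n%s\n' "$(get_ip_address cvl_0_0)" "$(get_ip_address cvl_0_1)")
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)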
02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3290223 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3290223 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 3290223 ']' 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
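waitforlisten blocks until the freshly forked nvmf_tgt answers on its RPC socket. A simplified equivalent of the launch-and-wait step above, using rpc_get_methods as a cheap liveness probe (the suite's real helper is more defensive about timeouts and stale sockets):

  # Start the target (shm id 0, core mask 0x1) and record its pid.
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!

  # Retry a trivial RPC until the app is up on /var/tmp/spdk.sock.
  for _ in $(seq 1 100); do
      scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done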
00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:32.682 02:03:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:33.619 Malloc0 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:22:33.619 02:03:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:23:05.694 Fuzzing completed. 
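Condensed, the target the fuzzer just attacked was assembled from five RPCs and then handed to nvme_fuzz as a transport ID string; every value below is taken from the trace above:

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512    # 64 MB RAM bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

  trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420'
  # 30-second timed run (-t) with a fixed seed (-S) for reproducibility.
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a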
Shutting down the fuzz application 00:23:05.694 00:23:05.694 Dumping successful admin opcodes: 00:23:05.694 8, 9, 10, 24, 00:23:05.694 Dumping successful io opcodes: 00:23:05.694 0, 9, 00:23:05.694 NS: 0x200003af0ec0 I/O qp, Total commands completed: 790268, total successful commands: 4598, random_seed: 701434176 00:23:05.694 NS: 0x200003af0ec0 admin qp, Total commands completed: 100304, total successful commands: 823, random_seed: 1092459264 00:23:05.694 02:04:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:06.637 Fuzzing completed. Shutting down the fuzz application 00:23:06.637 00:23:06.637 Dumping successful admin opcodes: 00:23:06.637 24, 00:23:06.637 Dumping successful io opcodes: 00:23:06.637 00:23:06.637 NS: 0x200003af0ec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1591926166 00:23:06.637 NS: 0x200003af0ec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1592027208 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:23:06.637 rmmod nvme_rdma 00:23:06.637 rmmod nvme_fabrics 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@515 -- # '[' -n 3290223 ']' 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # killprocess 3290223 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3290223 ']' 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 3290223 00:23:06.637 02:04:26 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3290223 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3290223' 00:23:06.637 killing process with pid 3290223 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 3290223 00:23:06.637 02:04:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 3290223 00:23:08.012 02:04:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:08.012 02:04:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:23:08.012 02:04:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:08.012 00:23:08.012 real 0m42.656s 00:23:08.012 user 0m55.855s 00:23:08.012 sys 0m19.603s 00:23:08.012 02:04:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:08.012 02:04:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:08.012 ************************************ 00:23:08.012 END TEST nvmf_fuzz 00:23:08.012 ************************************ 00:23:08.271 02:04:27 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:23:08.271 02:04:27 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:08.271 02:04:27 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:08.271 02:04:27 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:08.271 ************************************ 00:23:08.271 START TEST nvmf_multiconnection 00:23:08.271 ************************************ 00:23:08.271 02:04:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:23:08.271 * Looking for test storage... 
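killprocess, traced in full above for pids 3264396, 3283843 and now 3290223, reduces to the same pattern each time: confirm the pid is alive, read the process name so a recycled pid is never signalled blindly, then kill and reap. A condensed sketch (the real helper also special-cases processes running under sudo):

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                   # still alive?
      local name
      name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0
      echo "killing process with pid $pid ($name)"
      kill "$pid" && wait "$pid"                   # signal, then reap
  }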
00:23:08.271 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:23:08.271 02:04:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:08.271 02:04:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:23:08.271 02:04:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:23:08.271 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:23:08.272 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:08.272 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:08.272 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:23:08.272 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:08.272 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:08.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.272 --rc genhtml_branch_coverage=1 00:23:08.272 --rc genhtml_function_coverage=1 00:23:08.272 --rc genhtml_legend=1 00:23:08.272 --rc geninfo_all_blocks=1 00:23:08.272 --rc geninfo_unexecuted_blocks=1 00:23:08.272 00:23:08.272 ' 00:23:08.272 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:08.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.272 --rc genhtml_branch_coverage=1 00:23:08.272 --rc genhtml_function_coverage=1 00:23:08.272 --rc genhtml_legend=1 00:23:08.272 --rc geninfo_all_blocks=1 00:23:08.272 --rc geninfo_unexecuted_blocks=1 00:23:08.272 00:23:08.272 ' 00:23:08.272 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:08.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.272 --rc genhtml_branch_coverage=1 00:23:08.272 --rc genhtml_function_coverage=1 00:23:08.272 --rc genhtml_legend=1 00:23:08.272 --rc geninfo_all_blocks=1 00:23:08.272 --rc geninfo_unexecuted_blocks=1 00:23:08.272 00:23:08.272 ' 00:23:08.272 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:08.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.272 --rc genhtml_branch_coverage=1 00:23:08.272 --rc genhtml_function_coverage=1 00:23:08.272 --rc genhtml_legend=1 00:23:08.272 --rc geninfo_all_blocks=1 00:23:08.272 --rc geninfo_unexecuted_blocks=1 00:23:08.272 00:23:08.272 ' 00:23:08.272 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:23:08.272 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:23:08.272 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:08.272 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:08.272 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:08.272 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:08.272 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:08.272 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:08.272 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:08.272 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:08.272 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:08.272 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:08.530 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:23:08.530 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:23:08.530 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:08.531 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:23:08.531 02:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:15.095 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.095 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:23:15.095 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:15.095 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:15.095 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:15.095 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:15.095 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:15.095 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:23:15.095 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 
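The "[: : integer expression expected" complaint captured above comes from nvmf/common.sh line 33 handing an empty string to a numeric test ('[' '' -eq 1 ']'). The check fails harmlessly here, but a defensive sketch of the same pattern (FEATURE_FLAG is a hypothetical stand-in for whichever variable was unset) would default the value first:

    # POSIX '[' treats an empty operand of -eq as a runtime error, so give the
    # flag a numeric default before comparing. FEATURE_FLAG is illustrative only.
    FEATURE_FLAG=${FEATURE_FLAG:-0}
    if [ "$FEATURE_FLAG" -eq 1 ]; then
        echo "feature enabled"
    fi

Bash's [[ ]] form would also tolerate the empty value, since it evaluates -eq operands as arithmetic and an empty string becomes 0.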
00:23:15.095 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:23:15.095 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:23:15.095 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:23:15.095 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:23:15.095 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:23:15.095 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:23:15.095 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 
(0x8086 - 0x159b)' 00:23:15.096 Found 0000:18:00.0 (0x8086 - 0x159b) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:23:15.096 Found 0000:18:00.1 (0x8086 - 0x159b) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@403 -- # modinfo irdma 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:23:15.096 Found net devices under 0000:18:00.0: cvl_0_0 
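Both ports just matched vendor 0x8086, device 0x159b, i.e. Intel E810 parts bound to the ice driver, which is why the script switches NVME_CONNECT to 'nvme connect -i 15' and reloads irdma with RoCE enabled. A condensed sketch of that classification, covering only the device IDs visible in this log:

    # Map (vendor, device) PCI IDs onto the NIC families the test knows about.
    classify_nic() {
        local vendor=$1 device=$2
        case "$vendor:$device" in
            0x8086:0x1592 | 0x8086:0x159b) echo e810 ;;    # Intel E810 (ice)
            0x8086:0x37d2)                 echo x722 ;;    # Intel X722
            0x15b3:*)                      echo mlx ;;     # Mellanox family
            *)                             echo unknown ;;
        esac
    }

    if [[ $(classify_nic 0x8086 0x159b) == e810 ]]; then
        sudo modprobe irdma roce_ena=1   # E810 RDMA via irdma, RoCEv2 enabled
    fi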
00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:23:15.096 Found net devices under 0000:18:00.1: cvl_0_1 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # is_hw=yes 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # rdma_device_init 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # uname 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@528 -- # allocate_nic_ips 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 
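The load_ib_rdma_modules step traced above is a fixed modprobe sequence that brings up the kernel RDMA stack before any interface work happens; a standalone equivalent of the calls shown is:

    # Core IB/RDMA kernel modules, in the same order the helper loads them.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        sudo modprobe "$mod"
    done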
00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo cvl_0_0 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo cvl_0_1 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:15.096 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:23:15.096 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:23:15.096 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:23:15.096 altname enp24s0f0np0 00:23:15.096 altname ens785f0np0 00:23:15.096 inet 192.168.100.8/24 scope global cvl_0_0 00:23:15.097 valid_lft forever preferred_lft forever 00:23:15.097 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:23:15.097 valid_lft forever preferred_lft forever 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:23:15.097 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:23:15.097 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:23:15.097 altname enp24s0f1np1 00:23:15.097 altname ens785f1np1 00:23:15.097 inet 192.168.100.9/24 scope global cvl_0_1 00:23:15.097 valid_lft forever preferred_lft forever 00:23:15.097 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:23:15.097 valid_lft forever preferred_lft forever 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # return 0 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo cvl_0_0 
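The get_ip_address calls above recover each interface's IPv4 address with a three-stage pipeline: 'ip -o -4' prints one line per address, awk field 4 holds "ADDR/PREFIX", and cut drops the prefix length. Reproduced as a standalone helper:

    # Print the IPv4 address assigned to an interface, without the /prefix.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address cvl_0_0   # -> 192.168.100.8 on this testbed
    get_ip_address cvl_0_1   # -> 192.168.100.9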
00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo cvl_0_1 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:23:15.097 192.168.100.9' 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:23:15.097 192.168.100.9' 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # head -n 1 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:23:15.097 192.168.100.9' 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # tail -n +2 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # head -n 1 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # nvmfpid=3297653 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # waitforlisten 3297653 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 3297653 ']' 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:15.097 02:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:15.097 [2024-10-09 02:04:34.456860] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:23:15.097 [2024-10-09 02:04:34.456972] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.097 [2024-10-09 02:04:34.587615] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:15.097 [2024-10-09 02:04:34.781805] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.097 [2024-10-09 02:04:34.781871] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.097 [2024-10-09 02:04:34.781885] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.097 [2024-10-09 02:04:34.781899] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.097 [2024-10-09 02:04:34.781910] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
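nvmfappstart has now launched the target ('nvmf_tgt -i 0 -e 0xFFFF -m 0xF', pid 3297653) and waitforlisten is polling /var/tmp/spdk.sock until the app answers RPCs. A simplified sketch of that startup handshake (the polling loop is a stand-in for the real waitforlisten helper, not its exact implementation):

    # Start the NVMe-oF target on a 4-core mask and wait for its RPC socket.
    sudo ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    until sudo ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5   # reactors still starting / socket not bound yet
    done
    echo "nvmf_tgt (pid $nvmfpid) is ready on /var/tmp/spdk.sock"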
00:23:15.097 [2024-10-09 02:04:34.784314] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.097 [2024-10-09 02:04:34.784364] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.097 [2024-10-09 02:04:34.784425] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.097 [2024-10-09 02:04:34.784433] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:15.664 [2024-10-09 02:04:35.357614] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x6120000292c0/0x617000007c40) succeed. 00:23:15.664 [2024-10-09 02:04:35.367499] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029440/0x617000007fc0) succeed. 00:23:15.664 [2024-10-09 02:04:35.367544] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:15.664 Malloc1 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.664 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:15.664 [2024-10-09 02:04:35.481397] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:15.923 Malloc2 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:15.923 Malloc3 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 
-- # xtrace_disable 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.923 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.182 Malloc4 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.182 Malloc5 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 
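The Malloc1..Malloc11 / cnode1..cnode11 entries above and below all repeat one four-step RPC pattern per subsystem. With rpc.py standing in for the rpc_cmd wrapper, the whole multiconnection setup traced here reduces to:

    rpc=./scripts/rpc.py
    sock=/var/tmp/spdk.sock

    # One transport for the target, then 11 identical subsystems.
    $rpc -s $sock nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    for i in $(seq 1 11); do
        $rpc -s $sock bdev_malloc_create 64 512 -b "Malloc$i"   # 64 MiB bdev, 512 B blocks
        $rpc -s $sock nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        $rpc -s $sock nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        $rpc -s $sock nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t rdma -a 192.168.100.8 -s 4420
    done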
00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.182 Malloc6 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.182 02:04:35 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.182 02:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.440 Malloc7 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.440 Malloc8 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.440 02:04:36 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.440 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.699 Malloc9 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.699 02:04:36 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.699 Malloc10 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.699 Malloc11 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.699 02:04:36 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.699 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:23:16.958 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:16.958 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:16.958 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:16.958 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:16.958 02:04:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:19.487 02:04:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:19.487 02:04:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:19.487 02:04:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:23:19.487 02:04:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:19.487 02:04:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:19.487 02:04:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:19.487 02:04:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:19.487 02:04:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:23:19.487 02:04:38 
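
The xtrace above interleaves timestamps and guard checks with every RPC, which obscures the structure. Stripped of tracing, the per-subsystem setup it records (multiconnection.sh lines 21-25, with NVMF_SUBSYS=11 in this run) reduces to the following loop; this is a sketch reconstructed from the trace, where rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py:

for i in $(seq 1 $NVMF_SUBSYS); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"             # 64 MiB malloc bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t rdma -a 192.168.100.8 -s 4420
done

Every command and option spelling above is exactly as logged; each subsystem gets its bdev as a namespace and an RDMA listener on the same address/port, distinguished only by NQN.
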
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:19.487 02:04:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:19.487 02:04:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:19.487 02:04:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:19.487 02:04:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:21.391 02:04:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:21.391 02:04:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:21.391 02:04:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:23:21.391 02:04:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:21.391 02:04:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:21.391 02:04:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:21.391 02:04:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:21.391 02:04:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:23:21.391 02:04:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:23:21.391 02:04:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:21.391 02:04:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:21.391 02:04:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:21.391 02:04:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:23.926 02:04:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:23.926 02:04:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:23.926 02:04:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:23:23.926 02:04:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:23.926 02:04:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:23.926 02:04:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:23.926 02:04:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.926 02:04:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:23:23.926 02:04:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:23:23.926 02:04:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:23.926 02:04:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:23.926 02:04:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:23.926 02:04:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:25.831 02:04:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:25.831 02:04:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:25.831 02:04:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:23:25.831 02:04:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:25.831 02:04:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:25.831 02:04:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:25.831 02:04:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:25.831 02:04:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:23:26.089 02:04:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:23:26.089 02:04:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:26.089 02:04:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:26.089 02:04:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:26.089 02:04:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:27.993 02:04:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:27.993 02:04:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:27.993 02:04:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:23:27.993 02:04:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:27.993 02:04:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:27.993 02:04:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:27.993 02:04:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:27.993 02:04:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:23:28.252 02:04:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:23:28.252 02:04:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:28.252 02:04:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:28.252 02:04:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:28.252 02:04:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:30.157 02:04:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:30.157 02:04:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:30.157 02:04:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:23:30.157 02:04:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:30.157 02:04:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:30.416 02:04:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:30.416 02:04:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.416 02:04:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:23:30.416 02:04:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:23:30.416 02:04:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:30.416 02:04:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:30.416 02:04:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:30.416 02:04:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:32.951 02:04:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:32.951 02:04:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:32.951 02:04:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:23:32.951 02:04:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:32.951 02:04:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == 
nvme_device_counter )) 00:23:32.951 02:04:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:32.951 02:04:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:32.951 02:04:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:23:32.951 02:04:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:23:32.951 02:04:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:32.951 02:04:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:32.951 02:04:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:32.951 02:04:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:34.855 02:04:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:34.855 02:04:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:34.855 02:04:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:23:34.855 02:04:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:34.855 02:04:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:34.855 02:04:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:34.855 02:04:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:34.855 02:04:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:23:35.115 02:04:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:23:35.115 02:04:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:35.115 02:04:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:35.115 02:04:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:35.115 02:04:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:37.018 02:04:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:37.018 02:04:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:37.018 02:04:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:23:37.018 02:04:56 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:37.018 02:04:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:37.018 02:04:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:37.018 02:04:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:37.018 02:04:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:23:37.277 02:04:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:23:37.277 02:04:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:37.277 02:04:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:37.277 02:04:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:37.277 02:04:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:39.196 02:04:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:39.196 02:04:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:39.196 02:04:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:23:39.196 02:04:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:39.196 02:04:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:39.196 02:04:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:39.196 02:04:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:39.196 02:04:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:23:39.455 02:04:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:23:39.455 02:04:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:39.455 02:04:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:39.455 02:04:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:39.455 02:04:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:41.988 02:05:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:41.988 02:05:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
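
The connect phase repeats one pattern per subsystem: an nvme connect over RDMA, then waitforserial, which polls lsblk until a block device carrying the subsystem's serial number appears. A minimal sketch, reconstructed from the multiconnection.sh (lines 28-30) and autotest_common.sh line numbers visible in the trace; the helper body is simplified, not verbatim (the real helper also accepts an expected device count, fixed at 1 here):

waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    while (( i++ <= 15 )); do                  # up to ~16 attempts
        sleep 2                                # give the controller time to attach
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
}

for i in $(seq 1 $NVMF_SUBSYS); do
    nvme connect -i 15 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 \
        --hostid=80e71deb-ee4e-e711-906e-0012795d9712 \
        -t rdma -n "nqn.2016-06.io.spdk:cnode$i" -a 192.168.100.8 -s 4420
    waitforserial "SPDK$i"                     # block until the SPDK$i namespace surfaces
done
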
common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:41.989 02:05:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:23:41.989 02:05:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:41.989 02:05:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:41.989 02:05:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:41.989 02:05:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:23:41.989 [global] 00:23:41.989 thread=1 00:23:41.989 invalidate=1 00:23:41.989 rw=read 00:23:41.989 time_based=1 00:23:41.989 runtime=10 00:23:41.989 ioengine=libaio 00:23:41.989 direct=1 00:23:41.989 bs=262144 00:23:41.989 iodepth=64 00:23:41.989 norandommap=1 00:23:41.989 numjobs=1 00:23:41.989 00:23:41.989 [job0] 00:23:41.989 filename=/dev/nvme0n1 00:23:41.989 [job1] 00:23:41.989 filename=/dev/nvme10n1 00:23:41.989 [job2] 00:23:41.989 filename=/dev/nvme1n1 00:23:41.989 [job3] 00:23:41.989 filename=/dev/nvme2n1 00:23:41.989 [job4] 00:23:41.989 filename=/dev/nvme3n1 00:23:41.989 [job5] 00:23:41.989 filename=/dev/nvme4n1 00:23:41.989 [job6] 00:23:41.989 filename=/dev/nvme5n1 00:23:41.989 [job7] 00:23:41.989 filename=/dev/nvme6n1 00:23:41.989 [job8] 00:23:41.989 filename=/dev/nvme7n1 00:23:41.989 [job9] 00:23:41.989 filename=/dev/nvme8n1 00:23:41.989 [job10] 00:23:41.989 filename=/dev/nvme9n1 00:23:41.989 Could not set queue depth (nvme0n1) 00:23:41.989 Could not set queue depth (nvme10n1) 00:23:41.989 Could not set queue depth (nvme1n1) 00:23:41.989 Could not set queue depth (nvme2n1) 00:23:41.989 Could not set queue depth (nvme3n1) 00:23:41.989 Could not set queue depth (nvme4n1) 00:23:41.989 Could not set queue depth (nvme5n1) 00:23:41.989 Could not set queue depth (nvme6n1) 00:23:41.989 Could not set queue depth (nvme7n1) 00:23:41.989 Could not set queue depth (nvme8n1) 00:23:41.989 Could not set queue depth (nvme9n1) 00:23:41.989 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:41.989 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:41.989 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:41.989 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:41.989 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:41.989 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:41.989 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:41.989 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:41.989 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:41.989 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:41.989 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, 
(T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:41.989 fio-3.35 00:23:41.989 Starting 11 threads 00:23:54.204 00:23:54.204 job0: (groupid=0, jobs=1): err= 0: pid=3301570: Wed Oct 9 02:05:12 2024 00:23:54.204 read: IOPS=1343, BW=336MiB/s (352MB/s)(3374MiB/10047msec) 00:23:54.204 slat (usec): min=10, max=26023, avg=696.40, stdev=1992.38 00:23:54.204 clat (msec): min=11, max=109, avg=46.89, stdev=24.29 00:23:54.204 lat (msec): min=11, max=116, avg=47.58, stdev=24.70 00:23:54.204 clat percentiles (msec): 00:23:54.204 | 1.00th=[ 13], 5.00th=[ 14], 10.00th=[ 14], 20.00th=[ 16], 00:23:54.204 | 30.00th=[ 26], 40.00th=[ 45], 50.00th=[ 51], 60.00th=[ 61], 00:23:54.204 | 70.00th=[ 64], 80.00th=[ 68], 90.00th=[ 79], 95.00th=[ 82], 00:23:54.204 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 100], 99.95th=[ 103], 00:23:54.204 | 99.99th=[ 107] 00:23:54.204 bw ( KiB/s): min=191488, max=1000448, per=8.44%, avg=343884.80, stdev=229163.71, samples=20 00:23:54.204 iops : min= 748, max= 3908, avg=1343.30, stdev=895.17, samples=20 00:23:54.204 lat (msec) : 20=25.66%, 50=23.75%, 100=50.53%, 250=0.07% 00:23:54.204 cpu : usr=0.26%, sys=3.33%, ctx=3714, majf=0, minf=4097 00:23:54.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:23:54.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:54.204 issued rwts: total=13496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:54.204 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:54.204 job1: (groupid=0, jobs=1): err= 0: pid=3301571: Wed Oct 9 02:05:12 2024 00:23:54.204 read: IOPS=1972, BW=493MiB/s (517MB/s)(4945MiB/10025msec) 00:23:54.205 slat (usec): min=10, max=61092, avg=474.95, stdev=1905.71 00:23:54.205 clat (usec): min=1199, max=149654, avg=31926.38, stdev=18560.55 00:23:54.205 lat (usec): min=1212, max=149686, avg=32401.33, stdev=18878.53 00:23:54.205 clat percentiles (msec): 00:23:54.205 | 1.00th=[ 13], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 16], 00:23:54.205 | 30.00th=[ 21], 40.00th=[ 28], 50.00th=[ 29], 60.00th=[ 30], 00:23:54.205 | 70.00th=[ 32], 80.00th=[ 44], 90.00th=[ 62], 95.00th=[ 75], 00:23:54.205 | 99.00th=[ 92], 99.50th=[ 95], 99.90th=[ 120], 99.95th=[ 124], 00:23:54.205 | 99.99th=[ 140] 00:23:54.205 bw ( KiB/s): min=198144, max=1077248, per=12.39%, avg=504729.60, stdev=237727.92, samples=20 00:23:54.205 iops : min= 774, max= 4208, avg=1971.60, stdev=928.62, samples=20 00:23:54.205 lat (msec) : 2=0.03%, 4=0.05%, 10=0.13%, 20=29.79%, 50=58.31% 00:23:54.205 lat (msec) : 100=11.43%, 250=0.26% 00:23:54.205 cpu : usr=0.38%, sys=3.60%, ctx=5435, majf=0, minf=3815 00:23:54.205 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:23:54.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:54.205 issued rwts: total=19779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:54.205 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:54.205 job2: (groupid=0, jobs=1): err= 0: pid=3301572: Wed Oct 9 02:05:12 2024 00:23:54.205 read: IOPS=1835, BW=459MiB/s (481MB/s)(4600MiB/10024msec) 00:23:54.205 slat (usec): min=10, max=28717, avg=519.45, stdev=1539.22 00:23:54.205 clat (usec): min=807, max=109234, avg=34307.16, stdev=18734.35 00:23:54.205 lat (usec): min=837, max=109252, avg=34826.61, stdev=19055.51 00:23:54.205 clat percentiles (msec): 00:23:54.205 | 1.00th=[ 4], 5.00th=[ 14], 
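
The per-job summary lines fio prints can be cross-checked against its own counters. Taking job0 from the read pass above, which completed 13496 IOs of bs=262144 (256 KiB) in 10047 ms, a one-liner reproduces the reported io=, BW=, and IOPS= figures (numbers copied straight from the log):

awk 'BEGIN {
    mib = 13496 * 256 / 1024                            # total data read: 3374 MiB
    printf "%.0f MiB, %.0f MiB/s, %.0f IOPS\n", mib, mib / 10.047, 13496 / 10.047
}'
# -> 3374 MiB, 336 MiB/s, 1343 IOPS, matching io=3374MiB, BW=336MiB/s, IOPS=1343
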
10.00th=[ 15], 20.00th=[ 16], 00:23:54.205 | 30.00th=[ 28], 40.00th=[ 29], 50.00th=[ 30], 60.00th=[ 31], 00:23:54.205 | 70.00th=[ 43], 80.00th=[ 51], 90.00th=[ 63], 95.00th=[ 71], 00:23:54.205 | 99.00th=[ 85], 99.50th=[ 91], 99.90th=[ 97], 99.95th=[ 105], 00:23:54.205 | 99.99th=[ 110] 00:23:54.205 bw ( KiB/s): min=201728, max=913920, per=11.52%, avg=469470.80, stdev=218975.35, samples=20 00:23:54.205 iops : min= 788, max= 3570, avg=1833.85, stdev=855.35, samples=20 00:23:54.205 lat (usec) : 1000=0.02% 00:23:54.205 lat (msec) : 2=0.19%, 4=0.95%, 10=1.91%, 20=23.58%, 50=53.29% 00:23:54.205 lat (msec) : 100=20.00%, 250=0.07% 00:23:54.205 cpu : usr=0.32%, sys=3.39%, ctx=5345, majf=0, minf=4097 00:23:54.205 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:23:54.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:54.205 issued rwts: total=18399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:54.205 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:54.205 job3: (groupid=0, jobs=1): err= 0: pid=3301573: Wed Oct 9 02:05:12 2024 00:23:54.205 read: IOPS=1233, BW=308MiB/s (323MB/s)(3099MiB/10047msec) 00:23:54.205 slat (usec): min=10, max=42977, avg=726.02, stdev=2474.23 00:23:54.205 clat (usec): min=1720, max=122945, avg=51087.17, stdev=16566.96 00:23:54.205 lat (usec): min=1838, max=122987, avg=51813.19, stdev=16916.98 00:23:54.205 clat percentiles (msec): 00:23:54.205 | 1.00th=[ 22], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 38], 00:23:54.205 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 47], 60.00th=[ 53], 00:23:54.205 | 70.00th=[ 59], 80.00th=[ 66], 90.00th=[ 78], 95.00th=[ 81], 00:23:54.205 | 99.00th=[ 90], 99.50th=[ 93], 99.90th=[ 110], 99.95th=[ 118], 00:23:54.205 | 99.99th=[ 124] 00:23:54.205 bw ( KiB/s): min=207360, max=526336, per=7.75%, avg=315673.60, stdev=92726.62, samples=20 00:23:54.205 iops : min= 810, max= 2056, avg=1233.10, stdev=362.21, samples=20 00:23:54.205 lat (msec) : 2=0.02%, 4=0.04%, 10=0.15%, 20=0.72%, 50=57.23% 00:23:54.205 lat (msec) : 100=41.63%, 250=0.21% 00:23:54.205 cpu : usr=0.32%, sys=3.33%, ctx=3712, majf=0, minf=4097 00:23:54.205 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:54.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:54.205 issued rwts: total=12394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:54.205 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:54.205 job4: (groupid=0, jobs=1): err= 0: pid=3301574: Wed Oct 9 02:05:12 2024 00:23:54.205 read: IOPS=1093, BW=273MiB/s (287MB/s)(2744MiB/10038msec) 00:23:54.205 slat (usec): min=10, max=33615, avg=866.87, stdev=2535.46 00:23:54.205 clat (usec): min=1534, max=121216, avg=57584.37, stdev=16431.97 00:23:54.205 lat (usec): min=1558, max=121292, avg=58451.24, stdev=16811.11 00:23:54.205 clat percentiles (msec): 00:23:54.205 | 1.00th=[ 22], 5.00th=[ 31], 10.00th=[ 38], 20.00th=[ 45], 00:23:54.205 | 30.00th=[ 46], 40.00th=[ 53], 50.00th=[ 60], 60.00th=[ 63], 00:23:54.205 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 80], 95.00th=[ 85], 00:23:54.205 | 99.00th=[ 96], 99.50th=[ 101], 99.90th=[ 115], 99.95th=[ 117], 00:23:54.205 | 99.99th=[ 122] 00:23:54.205 bw ( KiB/s): min=196608, max=441856, per=6.86%, avg=279398.40, stdev=72277.30, samples=20 00:23:54.205 iops : min= 768, max= 1726, avg=1091.40, stdev=282.33, 
samples=20 00:23:54.205 lat (msec) : 2=0.03%, 4=0.14%, 10=0.25%, 20=0.57%, 50=36.84% 00:23:54.205 lat (msec) : 100=61.73%, 250=0.45% 00:23:54.205 cpu : usr=0.24%, sys=2.67%, ctx=3011, majf=0, minf=4097 00:23:54.205 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:23:54.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:54.205 issued rwts: total=10977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:54.205 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:54.205 job5: (groupid=0, jobs=1): err= 0: pid=3301575: Wed Oct 9 02:05:12 2024 00:23:54.205 read: IOPS=1085, BW=271MiB/s (285MB/s)(2726MiB/10046msec) 00:23:54.205 slat (usec): min=10, max=52302, avg=809.18, stdev=2938.19 00:23:54.205 clat (usec): min=1191, max=135621, avg=58083.84, stdev=18683.68 00:23:54.205 lat (usec): min=1204, max=135678, avg=58893.01, stdev=19108.97 00:23:54.205 clat percentiles (msec): 00:23:54.205 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 39], 20.00th=[ 45], 00:23:54.205 | 30.00th=[ 47], 40.00th=[ 54], 50.00th=[ 61], 60.00th=[ 64], 00:23:54.205 | 70.00th=[ 67], 80.00th=[ 75], 90.00th=[ 81], 95.00th=[ 86], 00:23:54.205 | 99.00th=[ 96], 99.50th=[ 104], 99.90th=[ 127], 99.95th=[ 133], 00:23:54.205 | 99.99th=[ 136] 00:23:54.205 bw ( KiB/s): min=178688, max=422912, per=6.81%, avg=277529.60, stdev=67378.41, samples=20 00:23:54.205 iops : min= 698, max= 1652, avg=1084.10, stdev=263.20, samples=20 00:23:54.205 lat (msec) : 2=0.28%, 4=0.32%, 10=1.12%, 20=3.38%, 50=32.02% 00:23:54.205 lat (msec) : 100=62.27%, 250=0.60% 00:23:54.205 cpu : usr=0.21%, sys=2.82%, ctx=3638, majf=0, minf=4097 00:23:54.205 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:23:54.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:54.205 issued rwts: total=10904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:54.205 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:54.205 job6: (groupid=0, jobs=1): err= 0: pid=3301576: Wed Oct 9 02:05:12 2024 00:23:54.205 read: IOPS=1355, BW=339MiB/s (355MB/s)(3403MiB/10045msec) 00:23:54.205 slat (usec): min=10, max=46916, avg=659.09, stdev=2380.72 00:23:54.205 clat (usec): min=300, max=131939, avg=46520.38, stdev=17572.47 00:23:54.205 lat (usec): min=343, max=133665, avg=47179.47, stdev=17952.46 00:23:54.205 clat percentiles (msec): 00:23:54.205 | 1.00th=[ 8], 5.00th=[ 16], 10.00th=[ 24], 20.00th=[ 30], 00:23:54.205 | 30.00th=[ 38], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 54], 00:23:54.205 | 70.00th=[ 58], 80.00th=[ 61], 90.00th=[ 68], 95.00th=[ 74], 00:23:54.205 | 99.00th=[ 89], 99.50th=[ 92], 99.90th=[ 104], 99.95th=[ 106], 00:23:54.205 | 99.99th=[ 132] 00:23:54.205 bw ( KiB/s): min=244224, max=572928, per=8.51%, avg=346820.30, stdev=88817.07, samples=20 00:23:54.205 iops : min= 954, max= 2238, avg=1354.75, stdev=346.94, samples=20 00:23:54.205 lat (usec) : 500=0.03%, 750=0.03%, 1000=0.01% 00:23:54.205 lat (msec) : 2=0.08%, 4=0.31%, 10=0.90%, 20=6.58%, 50=48.88% 00:23:54.205 lat (msec) : 100=42.95%, 250=0.24% 00:23:54.205 cpu : usr=0.31%, sys=3.98%, ctx=4347, majf=0, minf=4097 00:23:54.205 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:23:54.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.1%, >=64=0.0% 00:23:54.205 issued rwts: total=13612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:54.205 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:54.205 job7: (groupid=0, jobs=1): err= 0: pid=3301577: Wed Oct 9 02:05:12 2024 00:23:54.205 read: IOPS=1099, BW=275MiB/s (288MB/s)(2757MiB/10034msec) 00:23:54.205 slat (usec): min=10, max=58391, avg=836.74, stdev=3013.38 00:23:54.205 clat (msec): min=14, max=142, avg=57.33, stdev=16.51 00:23:54.205 lat (msec): min=14, max=142, avg=58.17, stdev=16.96 00:23:54.205 clat percentiles (msec): 00:23:54.205 | 1.00th=[ 21], 5.00th=[ 30], 10.00th=[ 36], 20.00th=[ 46], 00:23:54.205 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 59], 60.00th=[ 63], 00:23:54.205 | 70.00th=[ 65], 80.00th=[ 71], 90.00th=[ 80], 95.00th=[ 85], 00:23:54.205 | 99.00th=[ 96], 99.50th=[ 101], 99.90th=[ 129], 99.95th=[ 133], 00:23:54.205 | 99.99th=[ 138] 00:23:54.205 bw ( KiB/s): min=173915, max=422912, per=6.89%, avg=280696.50, stdev=66578.07, samples=20 00:23:54.205 iops : min= 679, max= 1652, avg=1096.45, stdev=260.10, samples=20 00:23:54.205 lat (msec) : 20=0.87%, 50=38.32%, 100=60.26%, 250=0.55% 00:23:54.206 cpu : usr=0.35%, sys=3.15%, ctx=3304, majf=0, minf=4097 00:23:54.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:23:54.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:54.206 issued rwts: total=11028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:54.206 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:54.206 job8: (groupid=0, jobs=1): err= 0: pid=3301578: Wed Oct 9 02:05:12 2024 00:23:54.206 read: IOPS=2184, BW=546MiB/s (573MB/s)(5482MiB/10037msec) 00:23:54.206 slat (usec): min=10, max=61152, avg=424.13, stdev=1784.33 00:23:54.206 clat (usec): min=1780, max=153598, avg=28833.18, stdev=21507.91 00:23:54.206 lat (usec): min=1804, max=154540, avg=29257.30, stdev=21860.53 00:23:54.206 clat percentiles (msec): 00:23:54.206 | 1.00th=[ 13], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 15], 00:23:54.206 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 16], 60.00th=[ 16], 00:23:54.206 | 70.00th=[ 42], 80.00th=[ 53], 90.00th=[ 62], 95.00th=[ 71], 00:23:54.206 | 99.00th=[ 92], 99.50th=[ 95], 99.90th=[ 101], 99.95th=[ 132], 00:23:54.206 | 99.99th=[ 155] 00:23:54.206 bw ( KiB/s): min=218112, max=1103360, per=13.74%, avg=559718.40, stdev=378990.31, samples=20 00:23:54.206 iops : min= 852, max= 4310, avg=2186.40, stdev=1480.43, samples=20 00:23:54.206 lat (msec) : 2=0.01%, 4=0.03%, 10=0.15%, 20=66.09%, 50=12.54% 00:23:54.206 lat (msec) : 100=21.08%, 250=0.10% 00:23:54.206 cpu : usr=0.33%, sys=3.91%, ctx=6115, majf=0, minf=4097 00:23:54.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:23:54.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:54.206 issued rwts: total=21927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:54.206 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:54.206 job9: (groupid=0, jobs=1): err= 0: pid=3301579: Wed Oct 9 02:05:12 2024 00:23:54.206 read: IOPS=1316, BW=329MiB/s (345MB/s)(3300MiB/10026msec) 00:23:54.206 slat (usec): min=10, max=46734, avg=735.55, stdev=1973.44 00:23:54.206 clat (msec): min=8, max=119, avg=47.82, stdev=15.92 00:23:54.206 lat (msec): min=8, max=119, avg=48.56, stdev=16.24 00:23:54.206 clat percentiles (usec): 00:23:54.206 | 
1.00th=[19792], 5.00th=[29230], 10.00th=[30016], 20.00th=[31065], 00:23:54.206 | 30.00th=[32900], 40.00th=[42730], 50.00th=[45351], 60.00th=[53216], 00:23:54.206 | 70.00th=[58459], 80.00th=[61080], 90.00th=[69731], 95.00th=[73925], 00:23:54.206 | 99.00th=[85459], 99.50th=[89654], 99.90th=[94897], 99.95th=[94897], 00:23:54.206 | 99.99th=[99091] 00:23:54.206 bw ( KiB/s): min=207360, max=531968, per=8.25%, avg=336272.35, stdev=98302.73, samples=20 00:23:54.206 iops : min= 810, max= 2078, avg=1313.55, stdev=383.99, samples=20 00:23:54.206 lat (msec) : 10=0.08%, 20=1.00%, 50=55.68%, 100=43.23%, 250=0.01% 00:23:54.206 cpu : usr=0.36%, sys=3.83%, ctx=3202, majf=0, minf=4097 00:23:54.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:23:54.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:54.206 issued rwts: total=13200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:54.206 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:54.206 job10: (groupid=0, jobs=1): err= 0: pid=3301580: Wed Oct 9 02:05:12 2024 00:23:54.206 read: IOPS=1413, BW=353MiB/s (371MB/s)(3543MiB/10025msec) 00:23:54.206 slat (usec): min=10, max=50673, avg=677.39, stdev=2099.32 00:23:54.206 clat (usec): min=571, max=120686, avg=44551.63, stdev=17584.23 00:23:54.206 lat (usec): min=618, max=124968, avg=45229.02, stdev=17927.00 00:23:54.206 clat percentiles (msec): 00:23:54.206 | 1.00th=[ 3], 5.00th=[ 15], 10.00th=[ 29], 20.00th=[ 31], 00:23:54.206 | 30.00th=[ 32], 40.00th=[ 38], 50.00th=[ 44], 60.00th=[ 47], 00:23:54.206 | 70.00th=[ 57], 80.00th=[ 60], 90.00th=[ 68], 95.00th=[ 74], 00:23:54.206 | 99.00th=[ 87], 99.50th=[ 91], 99.90th=[ 103], 99.95th=[ 109], 00:23:54.206 | 99.99th=[ 121] 00:23:54.206 bw ( KiB/s): min=201216, max=530944, per=8.87%, avg=361199.05, stdev=98357.49, samples=20 00:23:54.206 iops : min= 786, max= 2074, avg=1410.90, stdev=384.22, samples=20 00:23:54.206 lat (usec) : 750=0.06%, 1000=0.07% 00:23:54.206 lat (msec) : 2=0.71%, 4=0.73%, 10=1.23%, 20=4.08%, 50=56.55% 00:23:54.206 lat (msec) : 100=36.46%, 250=0.11% 00:23:54.206 cpu : usr=0.31%, sys=3.53%, ctx=3960, majf=0, minf=4097 00:23:54.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:54.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:54.206 issued rwts: total=14171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:54.206 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:54.206 00:23:54.206 Run status group 0 (all jobs): 00:23:54.206 READ: bw=3978MiB/s (4172MB/s), 271MiB/s-546MiB/s (285MB/s-573MB/s), io=39.0GiB (41.9GB), run=10024-10047msec 00:23:54.206 00:23:54.206 Disk stats (read/write): 00:23:54.206 nvme0n1: ios=26561/0, merge=0/0, ticks=1213128/0, in_queue=1213128, util=96.47% 00:23:54.206 nvme10n1: ios=38818/0, merge=0/0, ticks=1209367/0, in_queue=1209367, util=96.75% 00:23:54.206 nvme1n1: ios=36051/0, merge=0/0, ticks=1209354/0, in_queue=1209354, util=97.11% 00:23:54.206 nvme2n1: ios=24319/0, merge=0/0, ticks=1213886/0, in_queue=1213886, util=97.34% 00:23:54.206 nvme3n1: ios=21478/0, merge=0/0, ticks=1210352/0, in_queue=1210352, util=97.45% 00:23:54.206 nvme4n1: ios=21395/0, merge=0/0, ticks=1216191/0, in_queue=1216191, util=97.90% 00:23:54.206 nvme5n1: ios=26786/0, merge=0/0, ticks=1213637/0, in_queue=1213637, util=98.10% 00:23:54.206 nvme6n1: 
ios=21567/0, merge=0/0, ticks=1211589/0, in_queue=1211589, util=98.26% 00:23:54.206 nvme7n1: ios=43358/0, merge=0/0, ticks=1209290/0, in_queue=1209290, util=98.82% 00:23:54.206 nvme8n1: ios=25674/0, merge=0/0, ticks=1213045/0, in_queue=1213045, util=99.07% 00:23:54.206 nvme9n1: ios=27612/0, merge=0/0, ticks=1213099/0, in_queue=1213099, util=99.23% 00:23:54.206 02:05:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:23:54.206 [global] 00:23:54.206 thread=1 00:23:54.206 invalidate=1 00:23:54.206 rw=randwrite 00:23:54.206 time_based=1 00:23:54.206 runtime=10 00:23:54.206 ioengine=libaio 00:23:54.206 direct=1 00:23:54.206 bs=262144 00:23:54.206 iodepth=64 00:23:54.206 norandommap=1 00:23:54.206 numjobs=1 00:23:54.206 00:23:54.206 [job0] 00:23:54.206 filename=/dev/nvme0n1 00:23:54.206 [job1] 00:23:54.206 filename=/dev/nvme10n1 00:23:54.206 [job2] 00:23:54.206 filename=/dev/nvme1n1 00:23:54.206 [job3] 00:23:54.206 filename=/dev/nvme2n1 00:23:54.206 [job4] 00:23:54.206 filename=/dev/nvme3n1 00:23:54.206 [job5] 00:23:54.206 filename=/dev/nvme4n1 00:23:54.206 [job6] 00:23:54.206 filename=/dev/nvme5n1 00:23:54.206 [job7] 00:23:54.206 filename=/dev/nvme6n1 00:23:54.206 [job8] 00:23:54.206 filename=/dev/nvme7n1 00:23:54.206 [job9] 00:23:54.206 filename=/dev/nvme8n1 00:23:54.206 [job10] 00:23:54.206 filename=/dev/nvme9n1 00:23:54.206 Could not set queue depth (nvme0n1) 00:23:54.206 Could not set queue depth (nvme10n1) 00:23:54.206 Could not set queue depth (nvme1n1) 00:23:54.206 Could not set queue depth (nvme2n1) 00:23:54.206 Could not set queue depth (nvme3n1) 00:23:54.206 Could not set queue depth (nvme4n1) 00:23:54.206 Could not set queue depth (nvme5n1) 00:23:54.206 Could not set queue depth (nvme6n1) 00:23:54.206 Could not set queue depth (nvme7n1) 00:23:54.206 Could not set queue depth (nvme8n1) 00:23:54.206 Could not set queue depth (nvme9n1) 00:23:54.206 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:54.206 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:54.206 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:54.206 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:54.206 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:54.206 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:54.206 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:54.206 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:54.206 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:54.206 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:54.206 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:54.206 fio-3.35 00:23:54.206 Starting 11 threads 00:24:04.337 00:24:04.337 job0: (groupid=0, jobs=1): err= 0: 
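
The randwrite pass whose output begins below is driven the same way as the read pass above; both wrapper invocations appear verbatim in this log, and comparing the [global] sections each one prints shows that only the workload type differs. Judging by those job files, the wrapper's -t argument maps to fio's rw= option, -i to bs=, -d to iodepth=, and -r to runtime= (the SPDK path is shortened here for readability):

SPDK=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
$SPDK/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read      -r 10   # first pass:  rw=read
$SPDK/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10   # second pass: rw=randwrite
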
pid=3303011: Wed Oct 9 02:05:23 2024 00:24:04.337 write: IOPS=1173, BW=293MiB/s (308MB/s)(2954MiB/10068msec); 0 zone resets 00:24:04.337 slat (usec): min=26, max=75259, avg=790.39, stdev=2829.35 00:24:04.337 clat (msec): min=6, max=187, avg=53.73, stdev=24.33 00:24:04.337 lat (msec): min=7, max=187, avg=54.52, stdev=24.78 00:24:04.337 clat percentiles (msec): 00:24:04.337 | 1.00th=[ 17], 5.00th=[ 31], 10.00th=[ 33], 20.00th=[ 34], 00:24:04.337 | 30.00th=[ 37], 40.00th=[ 44], 50.00th=[ 49], 60.00th=[ 53], 00:24:04.337 | 70.00th=[ 59], 80.00th=[ 70], 90.00th=[ 88], 95.00th=[ 110], 00:24:04.337 | 99.00th=[ 133], 99.50th=[ 140], 99.90th=[ 157], 99.95th=[ 171], 00:24:04.337 | 99.99th=[ 188] 00:24:04.337 bw ( KiB/s): min=135680, max=540160, per=8.66%, avg=300857.15, stdev=110595.00, samples=20 00:24:04.337 iops : min= 530, max= 2110, avg=1175.20, stdev=432.01, samples=20 00:24:04.337 lat (msec) : 10=0.03%, 20=2.56%, 50=52.31%, 100=38.58%, 250=6.51% 00:24:04.337 cpu : usr=3.82%, sys=3.85%, ctx=2620, majf=0, minf=158 00:24:04.337 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:04.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:04.338 issued rwts: total=0,11814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:04.338 job1: (groupid=0, jobs=1): err= 0: pid=3303023: Wed Oct 9 02:05:23 2024 00:24:04.338 write: IOPS=971, BW=243MiB/s (255MB/s)(2445MiB/10068msec); 0 zone resets 00:24:04.338 slat (usec): min=23, max=69906, avg=769.95, stdev=3107.16 00:24:04.338 clat (usec): min=752, max=203282, avg=65105.82, stdev=30064.43 00:24:04.338 lat (usec): min=823, max=203350, avg=65875.76, stdev=30634.54 00:24:04.338 clat percentiles (usec): 00:24:04.338 | 1.00th=[ 1598], 5.00th=[ 8356], 10.00th=[ 23725], 20.00th=[ 36439], 00:24:04.338 | 30.00th=[ 51643], 40.00th=[ 60031], 50.00th=[ 67634], 60.00th=[ 72877], 00:24:04.338 | 70.00th=[ 80217], 80.00th=[ 89654], 90.00th=[102237], 95.00th=[111674], 00:24:04.338 | 99.00th=[135267], 99.50th=[143655], 99.90th=[152044], 99.95th=[154141], 00:24:04.338 | 99.99th=[202376] 00:24:04.338 bw ( KiB/s): min=140800, max=433664, per=7.16%, avg=248728.75, stdev=78626.28, samples=20 00:24:04.338 iops : min= 550, max= 1694, avg=971.55, stdev=307.13, samples=20 00:24:04.338 lat (usec) : 1000=0.08% 00:24:04.338 lat (msec) : 2=1.62%, 4=1.01%, 10=2.67%, 20=3.15%, 50=20.06% 00:24:04.338 lat (msec) : 100=59.59%, 250=11.82% 00:24:04.338 cpu : usr=2.23%, sys=3.94%, ctx=2668, majf=0, minf=199 00:24:04.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:04.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:04.338 issued rwts: total=0,9778,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:04.338 job2: (groupid=0, jobs=1): err= 0: pid=3303025: Wed Oct 9 02:05:23 2024 00:24:04.338 write: IOPS=980, BW=245MiB/s (257MB/s)(2462MiB/10045msec); 0 zone resets 00:24:04.338 slat (usec): min=24, max=86945, avg=719.46, stdev=2997.25 00:24:04.338 clat (msec): min=4, max=159, avg=64.55, stdev=24.61 00:24:04.338 lat (msec): min=4, max=173, avg=65.27, stdev=25.06 00:24:04.338 clat percentiles (msec): 00:24:04.338 | 1.00th=[ 20], 5.00th=[ 29], 10.00th=[ 35], 20.00th=[ 44], 00:24:04.338 | 30.00th=[ 
51], 40.00th=[ 55], 50.00th=[ 62], 60.00th=[ 69], 00:24:04.338 | 70.00th=[ 78], 80.00th=[ 87], 90.00th=[ 97], 95.00th=[ 109], 00:24:04.338 | 99.00th=[ 130], 99.50th=[ 140], 99.90th=[ 146], 99.95th=[ 153], 00:24:04.338 | 99.99th=[ 159] 00:24:04.338 bw ( KiB/s): min=154112, max=448512, per=7.21%, avg=250470.40, stdev=77087.55, samples=20 00:24:04.338 iops : min= 602, max= 1752, avg=978.40, stdev=301.12, samples=20 00:24:04.338 lat (msec) : 10=0.20%, 20=1.04%, 50=28.72%, 100=61.63%, 250=8.41% 00:24:04.338 cpu : usr=2.44%, sys=3.81%, ctx=2787, majf=0, minf=148 00:24:04.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:04.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:04.338 issued rwts: total=0,9847,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:04.338 job3: (groupid=0, jobs=1): err= 0: pid=3303026: Wed Oct 9 02:05:23 2024 00:24:04.338 write: IOPS=1149, BW=287MiB/s (301MB/s)(2895MiB/10069msec); 0 zone resets 00:24:04.338 slat (usec): min=21, max=94154, avg=724.14, stdev=2749.04 00:24:04.338 clat (usec): min=456, max=156016, avg=54910.17, stdev=31803.42 00:24:04.338 lat (usec): min=690, max=181071, avg=55634.32, stdev=32316.26 00:24:04.338 clat percentiles (usec): 00:24:04.338 | 1.00th=[ 1303], 5.00th=[ 2933], 10.00th=[ 15008], 20.00th=[ 24249], 00:24:04.338 | 30.00th=[ 34341], 40.00th=[ 44303], 50.00th=[ 54264], 60.00th=[ 64226], 00:24:04.338 | 70.00th=[ 73925], 80.00th=[ 82314], 90.00th=[ 95945], 95.00th=[110625], 00:24:04.338 | 99.00th=[130548], 99.50th=[139461], 99.90th=[145753], 99.95th=[152044], 00:24:04.338 | 99.99th=[156238] 00:24:04.338 bw ( KiB/s): min=139264, max=604160, per=8.49%, avg=294809.60, stdev=113280.58, samples=20 00:24:04.338 iops : min= 544, max= 2360, avg=1151.60, stdev=442.50, samples=20 00:24:04.338 lat (usec) : 500=0.01%, 750=0.08%, 1000=0.22% 00:24:04.338 lat (msec) : 2=3.45%, 4=2.53%, 10=2.99%, 20=6.01%, 50=29.99% 00:24:04.338 lat (msec) : 100=46.11%, 250=8.62% 00:24:04.338 cpu : usr=2.88%, sys=4.28%, ctx=2946, majf=0, minf=23 00:24:04.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:04.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:04.338 issued rwts: total=0,11579,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:04.338 job4: (groupid=0, jobs=1): err= 0: pid=3303030: Wed Oct 9 02:05:23 2024 00:24:04.338 write: IOPS=1396, BW=349MiB/s (366MB/s)(3495MiB/10013msec); 0 zone resets 00:24:04.338 slat (usec): min=24, max=75913, avg=631.82, stdev=2521.46 00:24:04.338 clat (usec): min=809, max=172812, avg=45194.96, stdev=26968.41 00:24:04.338 lat (usec): min=875, max=188758, avg=45826.78, stdev=27439.38 00:24:04.338 clat percentiles (msec): 00:24:04.338 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 20], 00:24:04.338 | 30.00th=[ 29], 40.00th=[ 33], 50.00th=[ 36], 60.00th=[ 48], 00:24:04.338 | 70.00th=[ 57], 80.00th=[ 69], 90.00th=[ 84], 95.00th=[ 97], 00:24:04.338 | 99.00th=[ 124], 99.50th=[ 138], 99.90th=[ 148], 99.95th=[ 161], 00:24:04.338 | 99.99th=[ 174] 00:24:04.338 bw ( KiB/s): min=144896, max=740352, per=10.25%, avg=356249.60, stdev=180570.20, samples=20 00:24:04.338 iops : min= 566, max= 2892, avg=1391.60, stdev=705.35, 
samples=20 00:24:04.338 lat (usec) : 1000=0.03% 00:24:04.338 lat (msec) : 2=0.35%, 4=0.46%, 10=1.29%, 20=18.59%, 50=42.82% 00:24:04.338 lat (msec) : 100=32.34%, 250=4.11% 00:24:04.338 cpu : usr=4.00%, sys=4.24%, ctx=3008, majf=0, minf=417 00:24:04.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:24:04.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:04.338 issued rwts: total=0,13979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:04.338 job5: (groupid=0, jobs=1): err= 0: pid=3303031: Wed Oct 9 02:05:23 2024 00:24:04.338 write: IOPS=1216, BW=304MiB/s (319MB/s)(3055MiB/10042msec); 0 zone resets 00:24:04.338 slat (usec): min=19, max=44770, avg=546.43, stdev=2352.01 00:24:04.338 clat (usec): min=482, max=148038, avg=52020.25, stdev=25546.51 00:24:04.338 lat (usec): min=515, max=174071, avg=52566.68, stdev=25965.24 00:24:04.338 clat percentiles (msec): 00:24:04.338 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 20], 20.00th=[ 34], 00:24:04.338 | 30.00th=[ 39], 40.00th=[ 45], 50.00th=[ 50], 60.00th=[ 55], 00:24:04.338 | 70.00th=[ 64], 80.00th=[ 73], 90.00th=[ 84], 95.00th=[ 102], 00:24:04.338 | 99.00th=[ 120], 99.50th=[ 128], 99.90th=[ 136], 99.95th=[ 142], 00:24:04.338 | 99.99th=[ 146] 00:24:04.338 bw ( KiB/s): min=155136, max=540672, per=8.96%, avg=311244.80, stdev=102611.44, samples=20 00:24:04.338 iops : min= 606, max= 2112, avg=1215.80, stdev=400.83, samples=20 00:24:04.338 lat (usec) : 500=0.02%, 750=0.11%, 1000=0.10% 00:24:04.338 lat (msec) : 2=0.49%, 4=1.04%, 10=3.15%, 20=6.01%, 50=40.68% 00:24:04.338 lat (msec) : 100=43.38%, 250=5.03% 00:24:04.338 cpu : usr=2.82%, sys=4.83%, ctx=3409, majf=0, minf=72 00:24:04.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:04.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:04.338 issued rwts: total=0,12221,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:04.338 job6: (groupid=0, jobs=1): err= 0: pid=3303032: Wed Oct 9 02:05:23 2024 00:24:04.338 write: IOPS=966, BW=242MiB/s (253MB/s)(2433MiB/10066msec); 0 zone resets 00:24:04.338 slat (usec): min=25, max=85229, avg=717.13, stdev=3340.19 00:24:04.338 clat (usec): min=733, max=170875, avg=65448.15, stdev=28605.39 00:24:04.338 lat (usec): min=815, max=223123, avg=66165.29, stdev=29163.48 00:24:04.338 clat percentiles (msec): 00:24:04.338 | 1.00th=[ 4], 5.00th=[ 20], 10.00th=[ 29], 20.00th=[ 38], 00:24:04.338 | 30.00th=[ 50], 40.00th=[ 60], 50.00th=[ 67], 60.00th=[ 73], 00:24:04.338 | 70.00th=[ 81], 80.00th=[ 89], 90.00th=[ 104], 95.00th=[ 112], 00:24:04.338 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 146], 99.95th=[ 150], 00:24:04.338 | 99.99th=[ 171] 00:24:04.338 bw ( KiB/s): min=151552, max=389120, per=7.12%, avg=247526.40, stdev=53955.46, samples=20 00:24:04.338 iops : min= 592, max= 1520, avg=966.90, stdev=210.76, samples=20 00:24:04.338 lat (usec) : 750=0.01%, 1000=0.04% 00:24:04.338 lat (msec) : 2=0.40%, 4=0.79%, 10=1.26%, 20=2.59%, 50=25.59% 00:24:04.338 lat (msec) : 100=57.55%, 250=11.77% 00:24:04.338 cpu : usr=2.49%, sys=3.78%, ctx=2840, majf=0, minf=11 00:24:04.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:04.338 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:04.338 issued rwts: total=0,9732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:04.338 job7: (groupid=0, jobs=1): err= 0: pid=3303033: Wed Oct 9 02:05:23 2024 00:24:04.338 write: IOPS=1303, BW=326MiB/s (342MB/s)(3269MiB/10030msec); 0 zone resets 00:24:04.338 slat (usec): min=14, max=87590, avg=459.75, stdev=2726.11 00:24:04.338 clat (usec): min=384, max=195334, avg=48615.75, stdev=29068.21 00:24:04.338 lat (usec): min=427, max=195437, avg=49075.50, stdev=29487.29 00:24:04.338 clat percentiles (usec): 00:24:04.338 | 1.00th=[ 1385], 5.00th=[ 5145], 10.00th=[ 11731], 20.00th=[ 19530], 00:24:04.338 | 30.00th=[ 28705], 40.00th=[ 38011], 50.00th=[ 49021], 60.00th=[ 55313], 00:24:04.338 | 70.00th=[ 63177], 80.00th=[ 72877], 90.00th=[ 88605], 95.00th=[ 95945], 00:24:04.338 | 99.00th=[123208], 99.50th=[137364], 99.90th=[143655], 99.95th=[143655], 00:24:04.338 | 99.99th=[196084] 00:24:04.338 bw ( KiB/s): min=207360, max=539136, per=9.59%, avg=333107.20, stdev=98115.24, samples=20 00:24:04.338 iops : min= 810, max= 2106, avg=1301.20, stdev=383.26, samples=20 00:24:04.339 lat (usec) : 500=0.18%, 750=0.28%, 1000=0.11% 00:24:04.339 lat (msec) : 2=1.09%, 4=1.87%, 10=5.15%, 20=11.82%, 50=31.15% 00:24:04.339 lat (msec) : 100=44.13%, 250=4.21% 00:24:04.339 cpu : usr=3.26%, sys=4.53%, ctx=3686, majf=0, minf=13 00:24:04.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:24:04.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:04.339 issued rwts: total=0,13075,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:04.339 job8: (groupid=0, jobs=1): err= 0: pid=3303034: Wed Oct 9 02:05:23 2024 00:24:04.339 write: IOPS=1516, BW=379MiB/s (398MB/s)(3808MiB/10042msec); 0 zone resets 00:24:04.339 slat (usec): min=32, max=46324, avg=638.41, stdev=1988.93 00:24:04.339 clat (usec): min=810, max=130322, avg=41534.37, stdev=23382.92 00:24:04.339 lat (usec): min=957, max=137185, avg=42172.78, stdev=23786.48 00:24:04.339 clat percentiles (msec): 00:24:04.339 | 1.00th=[ 4], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 20], 00:24:04.339 | 30.00th=[ 24], 40.00th=[ 32], 50.00th=[ 36], 60.00th=[ 41], 00:24:04.339 | 70.00th=[ 52], 80.00th=[ 64], 90.00th=[ 77], 95.00th=[ 89], 00:24:04.339 | 99.00th=[ 103], 99.50th=[ 106], 99.90th=[ 115], 99.95th=[ 121], 00:24:04.339 | 99.99th=[ 131] 00:24:04.339 bw ( KiB/s): min=185856, max=786944, per=11.18%, avg=388326.40, stdev=179664.78, samples=20 00:24:04.339 iops : min= 726, max= 3074, avg=1516.90, stdev=701.82, samples=20 00:24:04.339 lat (usec) : 1000=0.02% 00:24:04.339 lat (msec) : 2=0.37%, 4=0.68%, 10=0.65%, 20=20.87%, 50=45.53% 00:24:04.339 lat (msec) : 100=30.32%, 250=1.56% 00:24:04.339 cpu : usr=5.07%, sys=4.85%, ctx=2945, majf=0, minf=18 00:24:04.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:24:04.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:04.339 issued rwts: total=0,15232,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:04.339 job9: (groupid=0, jobs=1): err= 0: 
pid=3303035: Wed Oct 9 02:05:23 2024 00:24:04.339 write: IOPS=1095, BW=274MiB/s (287MB/s)(2758MiB/10066msec); 0 zone resets 00:24:04.339 slat (usec): min=21, max=74370, avg=610.33, stdev=2776.76 00:24:04.339 clat (usec): min=599, max=159659, avg=57768.64, stdev=28280.41 00:24:04.339 lat (usec): min=684, max=184460, avg=58378.97, stdev=28723.31 00:24:04.339 clat percentiles (msec): 00:24:04.339 | 1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 22], 20.00th=[ 33], 00:24:04.339 | 30.00th=[ 41], 40.00th=[ 50], 50.00th=[ 56], 60.00th=[ 65], 00:24:04.339 | 70.00th=[ 71], 80.00th=[ 82], 90.00th=[ 97], 95.00th=[ 109], 00:24:04.339 | 99.00th=[ 130], 99.50th=[ 142], 99.90th=[ 148], 99.95th=[ 157], 00:24:04.339 | 99.99th=[ 161] 00:24:04.339 bw ( KiB/s): min=173056, max=435712, per=8.08%, avg=280729.60, stdev=76623.40, samples=20 00:24:04.339 iops : min= 676, max= 1702, avg=1096.60, stdev=299.31, samples=20 00:24:04.339 lat (usec) : 750=0.03%, 1000=0.05% 00:24:04.339 lat (msec) : 2=0.57%, 4=0.43%, 10=2.10%, 20=5.60%, 50=31.88% 00:24:04.339 lat (msec) : 100=50.39%, 250=8.96% 00:24:04.339 cpu : usr=2.75%, sys=4.20%, ctx=3045, majf=0, minf=9 00:24:04.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:24:04.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:04.339 issued rwts: total=0,11030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:04.339 job10: (groupid=0, jobs=1): err= 0: pid=3303036: Wed Oct 9 02:05:23 2024 00:24:04.339 write: IOPS=1829, BW=457MiB/s (480MB/s)(4589MiB/10031msec); 0 zone resets 00:24:04.339 slat (usec): min=27, max=25771, avg=511.55, stdev=1506.96 00:24:04.339 clat (usec): min=721, max=132314, avg=34449.08, stdev=18236.70 00:24:04.339 lat (usec): min=793, max=137192, avg=34960.62, stdev=18516.58 00:24:04.339 clat percentiles (msec): 00:24:04.339 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 17], 20.00th=[ 18], 00:24:04.339 | 30.00th=[ 20], 40.00th=[ 26], 50.00th=[ 33], 60.00th=[ 36], 00:24:04.339 | 70.00th=[ 43], 80.00th=[ 50], 90.00th=[ 58], 95.00th=[ 69], 00:24:04.339 | 99.00th=[ 87], 99.50th=[ 90], 99.90th=[ 128], 99.95th=[ 131], 00:24:04.339 | 99.99th=[ 133] 00:24:04.339 bw ( KiB/s): min=230912, max=900096, per=13.48%, avg=468275.20, stdev=179742.08, samples=20 00:24:04.339 iops : min= 902, max= 3516, avg=1829.20, stdev=702.12, samples=20 00:24:04.339 lat (usec) : 750=0.01%, 1000=0.02% 00:24:04.339 lat (msec) : 2=0.10%, 4=0.33%, 10=1.77%, 20=29.92%, 50=49.10% 00:24:04.339 lat (msec) : 100=18.35%, 250=0.40% 00:24:04.339 cpu : usr=5.61%, sys=5.96%, ctx=3436, majf=0, minf=1436 00:24:04.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:24:04.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:04.339 issued rwts: total=0,18355,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:04.339 00:24:04.339 Run status group 0 (all jobs): 00:24:04.339 WRITE: bw=3393MiB/s (3557MB/s), 242MiB/s-457MiB/s (253MB/s-480MB/s), io=33.4GiB (35.8GB), run=10013-10069msec 00:24:04.339 00:24:04.339 Disk stats (read/write): 00:24:04.339 nvme0n1: ios=49/23410, merge=0/0, ticks=5/1226409, in_queue=1226414, util=97.31% 00:24:04.339 nvme10n1: ios=0/19341, merge=0/0, ticks=0/1232162, in_queue=1232162, util=97.41% 
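The per-job write numbers above (roughly 240-460 MiB/s at 970-1830 IOPS, queue depth 64, eleven jobs, ~10 s runtimes) divide out to ~0.25 MiB per I/O for every job, which points at a 256 KiB block size. A minimal fio invocation consistent with that shape — the device path, ioengine, and sequential-write assumption are inferred from the output, not taken from the test script:

    # Hypothetical reconstruction; bs=256k is inferred from the BW/IOPS
    # ratio, libaio and the device path are assumptions. One job like this
    # ran against each of the eleven NVMe-oF namespaces in the group.
    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=write --bs=256k --iodepth=64 --ioengine=libaio --direct=1 \
        --runtime=10 --time_based --group_reporting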
00:24:04.339 nvme1n1: ios=0/19374, merge=0/0, ticks=0/1236496, in_queue=1236496, util=97.70% 00:24:04.339 nvme2n1: ios=0/22917, merge=0/0, ticks=0/1228295, in_queue=1228295, util=97.84% 00:24:04.339 nvme3n1: ios=0/27385, merge=0/0, ticks=0/1230335, in_queue=1230335, util=97.87% 00:24:04.339 nvme4n1: ios=0/24120, merge=0/0, ticks=0/1238746, in_queue=1238746, util=98.18% 00:24:04.339 nvme5n1: ios=0/19252, merge=0/0, ticks=0/1233965, in_queue=1233965, util=98.33% 00:24:04.339 nvme6n1: ios=0/25806, merge=0/0, ticks=0/1238709, in_queue=1238709, util=98.42% 00:24:04.339 nvme7n1: ios=0/30156, merge=0/0, ticks=0/1225903, in_queue=1225903, util=98.76% 00:24:04.339 nvme8n1: ios=0/21821, merge=0/0, ticks=0/1239571, in_queue=1239571, util=98.92% 00:24:04.339 nvme9n1: ios=0/36320, merge=0/0, ticks=0/1226251, in_queue=1226251, util=99.03% 00:24:04.339 02:05:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:24:04.339 02:05:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:24:04.339 02:05:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:04.339 02:05:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:04.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:04.599 02:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:04.599 02:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:04.599 02:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:04.599 02:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:24:04.599 02:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:04.599 02:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:24:04.599 02:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:04.599 02:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:04.599 02:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.599 02:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:04.599 02:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.599 02:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:04.599 02:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:05.536 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:05.536 02:05:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:05.536 02:05:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:05.536 02:05:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:05.536 02:05:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:24:05.536 02:05:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:05.536 02:05:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:24:05.536 02:05:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:05.536 02:05:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:05.536 02:05:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.536 02:05:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:05.536 02:05:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.536 02:05:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:05.536 02:05:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:06.474 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:06.474 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:06.474 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:06.474 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:06.474 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:24:06.474 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:24:06.474 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:06.474 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:06.474 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:06.474 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.474 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.474 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.474 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:06.474 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:07.409 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:07.409 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:07.409 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:07.409 02:05:26 
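Each disconnect above is followed by the same polling pattern from autotest_common.sh: run lsblk -o NAME,SERIAL and grep for the subsystem serial (SPDK1, SPDK2, ...) until it disappears. A sketch of that helper — the retry limit and 1 s sleep are assumptions; only the lsblk/grep probe is taken from the trace:

    # Sketch of waitforserial_disconnect as traced above; the retry bound
    # and sleep interval are assumptions, the lsblk|grep probe is verbatim.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
            (( ++i > 15 )) && return 1   # give up if the device never goes away
            sleep 1
        done
        return 0
    }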
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:07.409 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:24:07.409 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:07.409 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:24:07.409 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:07.409 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:07.409 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.409 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.409 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.409 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.409 02:05:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:08.346 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:08.346 02:05:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:08.346 02:05:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:08.346 02:05:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:08.346 02:05:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:24:08.346 02:05:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:08.346 02:05:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:24:08.346 02:05:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:08.346 02:05:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:08.346 02:05:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.346 02:05:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.346 02:05:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.346 02:05:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:08.346 02:05:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:08.914 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:08.914 02:05:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:09.173 02:05:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local 
i=0 00:24:09.173 02:05:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:09.173 02:05:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:24:09.173 02:05:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:09.173 02:05:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:24:09.173 02:05:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:09.173 02:05:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:09.173 02:05:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.173 02:05:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:09.173 02:05:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.173 02:05:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.173 02:05:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:10.108 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:10.108 02:05:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:10.108 02:05:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:10.108 02:05:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:10.108 02:05:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:24:10.108 02:05:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:10.108 02:05:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:24:10.108 02:05:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:10.108 02:05:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:10.108 02:05:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.108 02:05:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.108 02:05:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.108 02:05:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.108 02:05:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:11.043 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:11.043 02:05:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:11.043 02:05:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1219 -- # local i=0 00:24:11.043 02:05:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:24:11.043 02:05:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:11.043 02:05:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:11.043 02:05:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:24:11.043 02:05:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:11.043 02:05:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:11.043 02:05:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.043 02:05:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.043 02:05:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.043 02:05:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.044 02:05:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:11.610 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:11.610 02:05:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:11.610 02:05:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:11.610 02:05:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:11.610 02:05:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:24:11.868 02:05:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:24:11.868 02:05:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:11.868 02:05:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:11.868 02:05:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:11.868 02:05:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.868 02:05:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.868 02:05:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.868 02:05:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.868 02:05:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:12.803 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:12.803 02:05:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:12.803 02:05:32 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:12.803 02:05:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:12.803 02:05:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:24:12.803 02:05:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:12.803 02:05:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:24:12.803 02:05:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:12.803 02:05:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:12.803 02:05:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.803 02:05:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:12.803 02:05:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.803 02:05:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:12.803 02:05:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:13.738 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- 
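Condensed, the teardown that just completed is a single loop over the eleven subsystems (multiconnection.sh lines 37-40 in the markers above): disconnect the initiator, wait for the block device to vanish, then delete the subsystem over RPC:

    # Condensed from the xtrace above; NVMF_SUBSYS is 11 in this run and
    # rpc_cmd is the autotest wrapper around SPDK's rpc.py.
    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        waitforserial_disconnect "SPDK${i}"
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done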
# nvmfcleanup 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:13.738 rmmod nvme_rdma 00:24:13.738 rmmod nvme_fabrics 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@515 -- # '[' -n 3297653 ']' 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # killprocess 3297653 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 3297653 ']' 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 3297653 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3297653 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3297653' 00:24:13.738 killing process with pid 3297653 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 3297653 00:24:13.738 02:05:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 3297653 00:24:17.025 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:17.025 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:24:17.025 00:24:17.025 real 1m8.844s 00:24:17.025 user 4m17.468s 00:24:17.026 sys 0m17.691s 00:24:17.026 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:17.026 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.026 ************************************ 00:24:17.026 END TEST nvmf_multiconnection 00:24:17.026 ************************************ 00:24:17.026 02:05:36 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout 
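The shutdown above ends with the killprocess helper: check that a pid was recorded, verify it is alive with kill -0, make sure the process name (reactor_0 here) is not sudo, then SIGTERM it and wait. A condensed sketch — the uname/Linux branch and error reporting are elided:

    # Condensed from the killprocess trace above (autotest_common.sh);
    # wait works here because the nvmf target is a child of this shell.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                        # no pid recorded
        kill -0 "$pid" || return 0                       # already gone
        if [[ $(ps --no-headers -o comm= "$pid") != sudo ]]; then
            echo "killing process with pid $pid"
            kill "$pid" && wait "$pid"
        fi
    }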
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:24:17.026 02:05:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:17.026 02:05:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:17.026 02:05:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:17.026 ************************************ 00:24:17.026 START TEST nvmf_initiator_timeout 00:24:17.026 ************************************ 00:24:17.026 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:24:17.286 * Looking for test storage... 00:24:17.286 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:17.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.286 --rc genhtml_branch_coverage=1 00:24:17.286 --rc genhtml_function_coverage=1 00:24:17.286 --rc genhtml_legend=1 00:24:17.286 --rc geninfo_all_blocks=1 00:24:17.286 --rc geninfo_unexecuted_blocks=1 00:24:17.286 00:24:17.286 ' 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:17.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.286 --rc genhtml_branch_coverage=1 00:24:17.286 --rc genhtml_function_coverage=1 00:24:17.286 --rc genhtml_legend=1 00:24:17.286 --rc geninfo_all_blocks=1 00:24:17.286 --rc geninfo_unexecuted_blocks=1 00:24:17.286 00:24:17.286 ' 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:17.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.286 --rc genhtml_branch_coverage=1 00:24:17.286 --rc genhtml_function_coverage=1 00:24:17.286 --rc genhtml_legend=1 00:24:17.286 --rc geninfo_all_blocks=1 00:24:17.286 --rc geninfo_unexecuted_blocks=1 00:24:17.286 00:24:17.286 ' 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:17.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.286 --rc genhtml_branch_coverage=1 00:24:17.286 --rc genhtml_function_coverage=1 00:24:17.286 --rc genhtml_legend=1 00:24:17.286 --rc geninfo_all_blocks=1 00:24:17.286 --rc geninfo_unexecuted_blocks=1 00:24:17.286 00:24:17.286 ' 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
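The lcov version gate traced above (lt 1.15 2, which calls cmp_versions 1.15 '<' 2 in scripts/common.sh) splits both versions on '.', '-' and ':' and compares component-wise, treating missing components as 0. A sketch with the two helpers inlined and only the '<' operator handled — the real cmp_versions also supports '>', '<=' and '>=':

    # Sketch of the component-wise compare traced above; only the '<'
    # branch is shown, and missing components default to 0.
    lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not strictly less-than
    }
    # lt 1.15 2 returns 0 (true), so the branch-coverage LCOV_OPTS are used.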
target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.286 02:05:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.286 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.286 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:24:17.286 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:24:17.286 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.286 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.286 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.286 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.287 02:05:37 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:17.287 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:24:17.287 02:05:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:23.858 02:05:43 
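The "[: : integer expression expected" message above is a genuine (if harmless) script bug: an empty variable reached a numeric test at nvmf/common.sh line 33 ('[' '' -eq 1 ']'). The usual defensive fix is a parameter-expansion default — the flag name and the action below are hypothetical; only the pattern is the point:

    # Hypothetical example of the defensive form; SOME_TEST_FLAG stands in
    # for whichever unset variable common.sh line 33 actually tests, and
    # the appended argument is a placeholder.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--some-extra-arg)
    fi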
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:24:23.858 Found 0000:18:00.0 (0x8086 - 0x159b) 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:24:23.858 Found 0000:18:00.1 (0x8086 - 0x159b) 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:23.858 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@403 -- # modinfo irdma 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:18:00.0: cvl_0_0' 00:24:23.859 Found net devices under 0000:18:00.0: cvl_0_0 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:24:23.859 Found net devices under 0000:18:00.1: cvl_0_1 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # is_hw=yes 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # rdma_device_init 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # uname 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe ib_cm 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe ib_core 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe ib_umad 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@70 -- # modprobe iw_cm 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@528 -- # allocate_nic_ips 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # get_rdma_if_list 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # 
local net_dev rxe_net_dev rxe_net_devs 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo cvl_0_0 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo cvl_0_1 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:24:23.859 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:24:23.859 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:24:23.859 altname enp24s0f0np0 00:24:23.859 altname ens785f0np0 00:24:23.859 inet 192.168.100.8/24 scope global cvl_0_0 00:24:23.859 valid_lft forever preferred_lft forever 00:24:23.859 inet6 fe80::b696:91ff:fedd:4026/64 
scope link proto kernel_ll 00:24:23.859 valid_lft forever preferred_lft forever 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:24:23.859 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:24:23.859 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:24:23.859 altname enp24s0f1np1 00:24:23.859 altname ens785f1np1 00:24:23.859 inet 192.168.100.9/24 scope global cvl_0_1 00:24:23.859 valid_lft forever preferred_lft forever 00:24:23.859 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:24:23.859 valid_lft forever preferred_lft forever 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # return 0 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # get_rdma_if_list 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
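By this point the harness has loaded the RDMA kernel stack, reloaded irdma with RoCE enabled for the E810 ports, and read each interface's IPv4 address by slicing the output of ip -o -4 addr show. A condensed sketch of that bring-up and of the get_ip_address pattern (interface name cvl_0_0 and the 192.168.100.8 result are from this run; needs root and assumes the modules are installed):

    #!/usr/bin/env bash
    # Kernel RDMA stack, mirroring load_ib_rdma_modules in the trace.
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done
    # E810 needs irdma loaded with RoCE enabled, as the trace does earlier.
    modprobe irdma roce_ena=1

    # get_ip_address: first IPv4 on an interface, minus the /prefix length.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address cvl_0_0    # prints 192.168.100.8 on this rig

The addresses gathered this way are what the trace splits next with head -n 1 and tail -n +2 into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP.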
00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo cvl_0_0 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:24:23.859 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo cvl_0_1 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:24:23.860 192.168.100.9' 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:24:23.860 192.168.100.9' 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # head -n 1 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:24:23.860 192.168.100.9' 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # tail -n +2 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # head -n 1 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:23.860 02:05:43 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # nvmfpid=3308794 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # waitforlisten 3308794 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 3308794 ']' 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:23.860 02:05:43 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:23.860 [2024-10-09 02:05:43.368020] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:24:23.860 [2024-10-09 02:05:43.368136] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.860 [2024-10-09 02:05:43.501552] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:24.119 [2024-10-09 02:05:43.695217] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.119 [2024-10-09 02:05:43.695277] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.119 [2024-10-09 02:05:43.695290] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.119 [2024-10-09 02:05:43.695306] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
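nvmfappstart above reduces to launching nvmf_tgt and blocking until its RPC socket answers. A hedged approximation (binary path, core mask, and tracepoint mask are from the log; the polling loop is my simplification of waitforlisten, probing the socket with a cheap RPC such as rpc_get_methods):

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll /var/tmp/spdk.sock until the app is up (simplified waitforlisten).
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            &>/dev/null && break
        sleep 0.1
    done
    echo "nvmf_tgt listening, pid $nvmfpid"

The EAL banner and the four "Reactor started on core N" notices that follow are what a successful start looks like with -m 0xF.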
00:24:24.119 [2024-10-09 02:05:43.695316] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.119 [2024-10-09 02:05:43.697601] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.119 [2024-10-09 02:05:43.697666] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:24.119 [2024-10-09 02:05:43.697729] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.119 [2024-10-09 02:05:43.697734] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:24:24.379 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:24.379 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:24:24.379 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:24.379 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:24.379 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:24.638 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.638 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:24.638 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:24.638 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.638 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:24.638 Malloc0 00:24:24.638 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.638 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:24:24.638 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.638 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:24.638 Delay0 00:24:24.638 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.638 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:24.638 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.638 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:24.638 [2024-10-09 02:05:44.330184] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x61200002a1c0/0x617000007c40) succeed. 00:24:24.638 [2024-10-09 02:05:44.340314] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x61200002a340/0x617000007fc0) succeed. 
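The rpc_cmd calls above build the device stack under test: a 64 MiB malloc bdev, a delay bdev over it with 30 us average and p99 latency on both reads and writes, and the RDMA transport. Issued directly through rpc.py, the sequence is (sizes and latencies copied from the trace):

    rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0
    # -r avg_read, -t p99_read, -w avg_write, -n p99_write, in microseconds.
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The two "Create IB device rocep24s0f*" notices are the transport binding to both E810 RDMA devices; the io unit size is then adjusted down to 24576 to fit the device's maximum I/O size.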
00:24:24.638 [2024-10-09 02:05:44.340350] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:24:24.638 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.638 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:24.639 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.639 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:24.639 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.639 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:24.639 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.639 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:24.639 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.639 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:24.639 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.639 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:24.639 [2024-10-09 02:05:44.372862] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:24.639 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.639 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:24:24.898 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:24:24.898 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:24:24.898 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:24.898 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:24.898 02:05:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:24:26.806 02:05:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:26.806 02:05:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:26.806 02:05:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:24:26.806 02:05:46 
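With the transport up, the trace exposes Delay0 through a subsystem, opens an RDMA listener on 192.168.100.8:4420, and connects the kernel initiator. The same steps as plain commands (NQN, serial, hostnqn/hostid, and address all lifted from the log; -i 15 caps nr-io-queues, which I take to be a concession to the limited queue-pair count on irdma):

    rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns "$nqn" Delay0
    $rpc nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 -t rdma -n "$nqn" -a 192.168.100.8 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 \
        --hostid=80e71deb-ee4e-e711-906e-0012795d9712
    # waitforserial: poll until the namespace surfaces with our serial.
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do
        sleep 2
    done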
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:26.806 02:05:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:26.807 02:05:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:24:27.067 02:05:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3309290 00:24:27.067 02:05:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:24:27.067 02:05:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:24:27.067 [global] 00:24:27.067 thread=1 00:24:27.067 invalidate=1 00:24:27.067 rw=write 00:24:27.067 time_based=1 00:24:27.067 runtime=60 00:24:27.067 ioengine=libaio 00:24:27.067 direct=1 00:24:27.067 bs=4096 00:24:27.067 iodepth=1 00:24:27.067 norandommap=0 00:24:27.067 numjobs=1 00:24:27.067 00:24:27.067 verify_dump=1 00:24:27.067 verify_backlog=512 00:24:27.067 verify_state_save=0 00:24:27.067 do_verify=1 00:24:27.067 verify=crc32c-intel 00:24:27.067 [job0] 00:24:27.067 filename=/dev/nvme0n1 00:24:27.067 Could not set queue depth (nvme0n1) 00:24:27.325 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:27.325 fio-3.35 00:24:27.325 Starting 1 thread 00:24:29.854 02:05:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:24:29.854 02:05:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.854 02:05:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:29.854 true 00:24:29.854 02:05:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.854 02:05:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:24:29.854 02:05:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.854 02:05:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:29.854 true 00:24:29.854 02:05:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.854 02:05:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:24:29.854 02:05:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.854 02:05:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:29.854 true 00:24:29.854 02:05:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.854 02:05:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:24:29.854 02:05:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.854 
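The fio-wrapper invocation above expands to the job file echoed in the log: a 60-second, iodepth-1, 4 KiB sequential-write job with CRC32C verification against /dev/nvme0n1. While it runs, the bdev_delay_update_latency calls above stretch Delay0 from 30 us to 31,000,000 us (31 s) on three axes and 310 s on p99 writes, long enough, presumably, to outlast the kernel initiator's default I/O timeout, which is the behavior under test. A sketch of both halves (the job file name is my own; all parameters are from the log):

    cat > initiator_timeout.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=60
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1

    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    EOF
    fio initiator_timeout.fio &

    rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
    for lat in avg_read avg_write p99_read; do
        $rpc bdev_delay_update_latency Delay0 "$lat" 31000000
    done
    $rpc bdev_delay_update_latency Delay0 p99_write 310000000

Three seconds later (the sleep 3 below) the trace restores all four values to 30, so only a bounded window of I/O is stalled and the job can still finish with verification intact.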
02:05:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:29.854 true 00:24:29.854 02:05:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.854 02:05:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:24:33.135 02:05:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:24:33.135 02:05:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.135 02:05:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:33.135 true 00:24:33.135 02:05:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.135 02:05:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:24:33.135 02:05:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.135 02:05:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:33.135 true 00:24:33.135 02:05:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.135 02:05:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:24:33.135 02:05:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.135 02:05:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:33.135 true 00:24:33.135 02:05:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.135 02:05:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:24:33.135 02:05:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.135 02:05:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:33.135 true 00:24:33.135 02:05:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.135 02:05:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:24:33.135 02:05:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3309290 00:25:29.351 00:25:29.351 job0: (groupid=0, jobs=1): err= 0: pid=3309441: Wed Oct 9 02:06:47 2024 00:25:29.351 read: IOPS=1148, BW=4594KiB/s (4704kB/s)(269MiB/60000msec) 00:25:29.351 slat (usec): min=5, max=13824, avg= 9.72, stdev=70.98 00:25:29.351 clat (usec): min=45, max=41551k, avg=724.12, stdev=158292.64 00:25:29.351 lat (usec): min=109, max=41551k, avg=733.84, stdev=158292.67 00:25:29.351 clat percentiles (usec): 00:25:29.351 | 1.00th=[ 109], 5.00th=[ 113], 10.00th=[ 114], 20.00th=[ 117], 00:25:29.351 | 30.00th=[ 118], 40.00th=[ 120], 50.00th=[ 121], 60.00th=[ 123], 00:25:29.351 | 70.00th=[ 125], 80.00th=[ 127], 90.00th=[ 129], 95.00th=[ 133], 00:25:29.351 | 99.00th=[ 137], 99.50th=[ 141], 99.90th=[ 151], 99.95th=[ 169], 
00:25:29.351 | 99.99th=[ 255] 00:25:29.351 write: IOPS=1152, BW=4608KiB/s (4719kB/s)(270MiB/60000msec); 0 zone resets 00:25:29.351 slat (usec): min=7, max=333, avg=12.16, stdev= 2.80 00:25:29.351 clat (usec): min=62, max=789, avg=119.02, stdev= 7.25 00:25:29.351 lat (usec): min=108, max=802, avg=131.17, stdev= 7.98 00:25:29.351 clat percentiles (usec): 00:25:29.351 | 1.00th=[ 106], 5.00th=[ 110], 10.00th=[ 112], 20.00th=[ 115], 00:25:29.351 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 119], 60.00th=[ 121], 00:25:29.351 | 70.00th=[ 123], 80.00th=[ 125], 90.00th=[ 127], 95.00th=[ 130], 00:25:29.351 | 99.00th=[ 135], 99.50th=[ 139], 99.90th=[ 161], 99.95th=[ 186], 00:25:29.351 | 99.99th=[ 269] 00:25:29.351 bw ( KiB/s): min= 4096, max=16384, per=100.00%, avg=14612.76, stdev=2352.50, samples=37 00:25:29.351 iops : min= 1024, max= 4096, avg=3653.19, stdev=588.13, samples=37 00:25:29.351 lat (usec) : 50=0.01%, 100=0.01%, 250=99.98%, 500=0.01%, 1000=0.01% 00:25:29.351 lat (msec) : >=2000=0.01% 00:25:29.351 cpu : usr=1.56%, sys=2.53%, ctx=138036, majf=0, minf=107 00:25:29.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:29.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.351 issued rwts: total=68904,69120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.351 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:29.351 00:25:29.351 Run status group 0 (all jobs): 00:25:29.351 READ: bw=4594KiB/s (4704kB/s), 4594KiB/s-4594KiB/s (4704kB/s-4704kB/s), io=269MiB (282MB), run=60000-60000msec 00:25:29.351 WRITE: bw=4608KiB/s (4719kB/s), 4608KiB/s-4608KiB/s (4719kB/s-4719kB/s), io=270MiB (283MB), run=60000-60000msec 00:25:29.351 00:25:29.351 Disk stats (read/write): 00:25:29.351 nvme0n1: ios=68944/68675, merge=0/0, ticks=7826/7677, in_queue=15503, util=99.87% 00:25:29.351 02:06:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:29.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:29.351 02:06:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:29.351 02:06:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:25:29.351 02:06:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:29.351 02:06:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:29.351 02:06:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:29.351 02:06:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:29.351 02:06:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:25:29.351 02:06:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:25:29.351 02:06:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:25:29.351 nvmf hotplug test: fio successful as expected 00:25:29.351 02:06:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:29.351 02:06:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.351 02:06:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:29.351 02:06:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.351 02:06:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:25:29.352 02:06:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:25:29.352 02:06:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:25:29.352 02:06:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:29.352 02:06:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:25:29.352 02:06:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:29.352 02:06:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:29.352 02:06:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:25:29.352 02:06:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:29.352 02:06:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:29.352 rmmod nvme_rdma 00:25:29.352 rmmod nvme_fabrics 00:25:29.352 02:06:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:29.352 02:06:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:25:29.352 02:06:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:25:29.352 02:06:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@515 -- # '[' -n 3308794 ']' 00:25:29.352 02:06:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # killprocess 3308794 00:25:29.352 02:06:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 3308794 ']' 00:25:29.352 02:06:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 3308794 00:25:29.352 02:06:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:25:29.352 02:06:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:29.352 02:06:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3308794 00:25:29.352 02:06:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:29.352 02:06:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:29.352 02:06:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3308794' 00:25:29.352 killing process with pid 3308794 00:25:29.352 02:06:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 
3308794 00:25:29.352 02:06:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 3308794 00:25:29.918 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:29.918 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:25:29.918 00:25:29.918 real 1m12.820s 00:25:29.918 user 4m27.314s 00:25:29.918 sys 0m7.455s 00:25:29.918 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:29.918 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:29.918 ************************************ 00:25:29.918 END TEST nvmf_initiator_timeout 00:25:29.918 ************************************ 00:25:29.918 02:06:49 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:25:29.918 02:06:49 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:25:29.918 02:06:49 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:25:29.918 02:06:49 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:25:29.918 02:06:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:29.918 02:06:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:29.918 02:06:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:29.918 ************************************ 00:25:29.918 START TEST nvmf_srq_overwhelm 00:25:29.918 ************************************ 00:25:29.918 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:25:30.177 * Looking for test storage... 
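Teardown is symmetric: disconnect the initiator, delete the subsystem, unload the fabrics modules, and kill the target, which is what the disconnect/rmmod/killprocess lines above show. Reduced to plain commands (NQN and PID from this run):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-rdma       # the 'rmmod nvme_rdma' lines in the log
    modprobe -v -r nvme-fabrics
    nvmfpid=3308794
    kill "$nvmfpid"                # killprocess; the trace first confirms the
                                   # process name with ps before killing it

After this the suite prints the 1m12s timing summary and moves straight on to the srq_overwhelm test below.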
00:25:30.177 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:25:30.177 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:30.177 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1681 -- # lcov --version 00:25:30.177 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:30.177 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:30.177 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:30.177 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:30.177 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:30.177 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:25:30.177 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:25:30.177 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:25:30.177 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:25:30.177 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:25:30.177 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:25:30.177 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:25:30.177 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:30.177 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:25:30.177 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:30.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.178 --rc genhtml_branch_coverage=1 00:25:30.178 --rc genhtml_function_coverage=1 00:25:30.178 --rc genhtml_legend=1 00:25:30.178 --rc geninfo_all_blocks=1 00:25:30.178 --rc geninfo_unexecuted_blocks=1 00:25:30.178 00:25:30.178 ' 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:30.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.178 --rc genhtml_branch_coverage=1 00:25:30.178 --rc genhtml_function_coverage=1 00:25:30.178 --rc genhtml_legend=1 00:25:30.178 --rc geninfo_all_blocks=1 00:25:30.178 --rc geninfo_unexecuted_blocks=1 00:25:30.178 00:25:30.178 ' 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:30.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.178 --rc genhtml_branch_coverage=1 00:25:30.178 --rc genhtml_function_coverage=1 00:25:30.178 --rc genhtml_legend=1 00:25:30.178 --rc geninfo_all_blocks=1 00:25:30.178 --rc geninfo_unexecuted_blocks=1 00:25:30.178 00:25:30.178 ' 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:30.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.178 --rc genhtml_branch_coverage=1 00:25:30.178 --rc genhtml_function_coverage=1 00:25:30.178 --rc genhtml_legend=1 00:25:30.178 --rc geninfo_all_blocks=1 00:25:30.178 --rc geninfo_unexecuted_blocks=1 00:25:30.178 00:25:30.178 ' 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
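The preamble above is scripts/common.sh deciding whether the installed lcov predates 2.0: lt 1.15 2 splits both version strings on '.' and '-' and compares them field by field, padding the shorter one with zeros. A compact reimplementation of the idea (not the literal cmp_versions code; numeric fields only):

    # Succeed (return 0) when dotted version $1 sorts before $2.
    version_lt() {
        local -a v1 v2
        IFS='.-' read -ra v1 <<< "$1"
        IFS='.-' read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    version_lt 1.15 2 && echo "old lcov: pass the 1.x branch-coverage flags"

That check is why the LCOV_OPTS exported next carry the --rc lcov_branch_coverage=1 style flags.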
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 
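Each sourced paths/export.sh prepends the golangci/protoc/go directories again, which is why the PATH values above repeat the same three entries several times over. Harmless for lookup, but noisy; a small illustrative helper (my own, not part of the SPDK scripts) that would collapse it while keeping first-seen order:

    # Drop repeated PATH entries, preserving first-seen order.
    # Assumes entries contain no glob characters or newlines.
    dedupe_path() {
        local IFS=: entry out=''
        declare -A seen
        for entry in $PATH; do
            [[ -n ${seen[$entry]} ]] && continue
            seen[$entry]=1
            out+=${out:+:}$entry
        done
        printf '%s\n' "$out"
    }
    PATH=$(dedupe_path)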
-- # '[' '' -eq 1 ']' 00:25:30.178 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:25:30.178 02:06:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@320 -- # e810=() 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # local -ga mlx 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:25:36.743 Found 0000:18:00.0 (0x8086 - 0x159b) 00:25:36.743 02:06:55 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:25:36.743 Found 0000:18:00.1 (0x8086 - 0x159b) 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.743 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@403 -- # modinfo irdma 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:25:36.744 Found net devices under 0000:18:00.0: cvl_0_0 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.744 
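Because the NICs are e810 and the transport is rdma, the trace widens the connect retry interval (NVME_CONNECT='nvme connect -i 15') and loads the irdma driver in RoCE mode. A sketch of that check-and-load, assuming root privileges; whether a prior modprobe -r is needed when irdma is already resident is not visible in this trace:

#!/usr/bin/env bash
param=/sys/module/irdma/parameters/roce_ena
# Load irdma with RoCE enabled unless it is already running that way.
if [[ ! -e $param ]] || (( $(<"$param") != 1 )); then
    modinfo irdma >/dev/null || exit 1   # the driver must exist on this kernel
    modprobe irdma roce_ena=1
fi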
02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:25:36.744 Found net devices under 0000:18:00.1: cvl_0_1 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # is_hw=yes 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # rdma_device_init 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:36.744 02:06:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@528 -- # allocate_nic_ips 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo cvl_0_0 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo cvl_0_1 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:25:36.744 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:25:36.744 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:25:36.744 altname enp24s0f0np0 00:25:36.744 altname ens785f0np0 00:25:36.744 inet 192.168.100.8/24 scope global cvl_0_0 00:25:36.744 valid_lft forever preferred_lft forever 00:25:36.744 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:25:36.744 valid_lft forever preferred_lft forever 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:25:36.744 02:06:56 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:25:36.744 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:25:36.744 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:25:36.744 altname enp24s0f1np1 00:25:36.744 altname ens785f1np1 00:25:36.744 inet 192.168.100.9/24 scope global cvl_0_1 00:25:36.744 valid_lft forever preferred_lft forever 00:25:36.744 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:25:36.744 valid_lft forever preferred_lft forever 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # return 0 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo cvl_0_0 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:25:36.744 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # 
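allocate_nic_ips resolves each RDMA-capable interface to its first IPv4 address via the three-stage pipeline visible at nvmf/common.sh@117 above. Extracted as a standalone helper (interface names taken from the trace):

#!/usr/bin/env bash
# First IPv4 address of an interface: `ip -o` prints one record per address,
# field 4 is ADDR/PREFIX, and cut strips the prefix length.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address cvl_0_0   # 192.168.100.8 on this rig
get_ip_address cvl_0_1   # 192.168.100.9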
for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo cvl_0_1 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:25:36.745 192.168.100.9' 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:25:36.745 192.168.100.9' 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # head -n 1 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # tail -n +2 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:25:36.745 192.168.100.9' 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # head -n 1 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # 
nvmfappstart -m 0xF 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # nvmfpid=3320690 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # waitforlisten 3320690 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@831 -- # '[' -z 3320690 ']' 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:36.745 02:06:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:36.745 [2024-10-09 02:06:56.293170] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:25:36.745 [2024-10-09 02:06:56.293270] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.745 [2024-10-09 02:06:56.424696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:37.003 [2024-10-09 02:06:56.621606] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:37.003 [2024-10-09 02:06:56.621661] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:37.003 [2024-10-09 02:06:56.621674] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:37.003 [2024-10-09 02:06:56.621691] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:37.003 [2024-10-09 02:06:56.621700] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
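nvmfappstart, expanded above, launches the target with a four-core mask and heavy tracing, then waits for the RPC socket. A sketch of the same bring-up using the trace's paths; the exact probe waitforlisten issues is not shown in the trace, so polling rpc_get_methods stands in for it here:

#!/usr/bin/env bash
spdk=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
# Start the NVMe-oF target: instance 0, tracepoint mask 0xFFFF, cores 0-3.
"$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Block until the target answers on the default RPC socket; waitforlisten
# caps the wait at max_retries=100 per the trace.
for ((i = 0; i < 100; i++)); do
    "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.5
done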
00:25:37.003 [2024-10-09 02:06:56.624030] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.003 [2024-10-09 02:06:56.624083] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:25:37.003 [2024-10-09 02:06:56.624157] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.003 [2024-10-09 02:06:56.624163] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # return 0 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:37.570 [2024-10-09 02:06:57.166194] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x6120000292c0/0x617000007c40) succeed. 00:25:37.570 [2024-10-09 02:06:57.176048] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029440/0x617000007fc0) succeed. 00:25:37.570 [2024-10-09 02:06:57.176083] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:37.570 Malloc0 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:37.570 [2024-10-09 02:06:57.297971] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.570 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:37.829 Malloc1 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.829 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:38.087 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.087 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:25:38.087 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:25:38.087 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:25:38.087 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:25:38.087 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme1n1 00:25:38.087 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:25:38.087 02:06:57 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme1n1 00:25:38.087 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:25:38.087 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:38.087 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:38.087 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.087 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:38.346 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.346 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:38.346 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.346 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:38.346 Malloc2 00:25:38.346 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.346 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:38.346 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.346 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:38.346 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.346 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:25:38.346 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.346 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:38.346 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.346 02:06:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme2n1 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme2n1 
00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:38.605 Malloc3 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.605 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:25:38.863 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:25:38.863 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:25:38.863 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:25:38.863 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme3n1 00:25:38.863 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:25:38.863 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme3n1 00:25:38.863 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # 
return 0 00:25:38.863 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:38.863 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:25:38.863 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.863 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:39.122 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.122 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:39.122 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.122 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:39.122 Malloc4 00:25:39.122 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.122 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:39.122 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.122 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:39.122 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.122 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:25:39.122 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.122 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:39.122 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.122 02:06:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:25:39.380 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:25:39.380 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:25:39.380 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:25:39.380 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme4n1 00:25:39.380 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:25:39.380 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme4n1 00:25:39.380 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:25:39.380 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # 
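Every pass of the srq_overwhelm.sh@22 loop (the fifth completes above; the sixth follows) performs the same five steps. Condensed into one block, with rpc_cmd standing in for scripts/rpc.py and NQNs, serials, and host IDs following the trace; the polling loop at the end is waitforblk reduced to its observable effect, not its literal body:

#!/usr/bin/env bash
# Per subsystem: create it, back it with a 64 MiB / 512 B-block malloc bdev,
# export it over RDMA on 192.168.100.8:4420, connect from the initiator,
# then wait for the block device to appear.
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712
for i in $(seq 0 5); do
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "$(printf 'SPDK%014d' "$i")"
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 --hostnqn="$hostnqn" --hostid="${hostnqn#*uuid:}" \
        -t rdma -n "nqn.2016-06.io.spdk:cnode$i" -a 192.168.100.8 -s 4420
    until lsblk -l -o NAME | grep -q -w "nvme${i}n1"; do sleep 0.1; done
done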
for i in $(seq 0 5) 00:25:39.380 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:25:39.380 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.380 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:39.380 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.380 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:39.380 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.380 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:39.380 Malloc5 00:25:39.381 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.381 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:39.381 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.381 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:39.381 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.381 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:25:39.381 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.381 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:39.381 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.381 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:25:39.639 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:25:39.639 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:25:39.639 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:25:39.639 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme5n1 00:25:39.639 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme5n1 00:25:39.896 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:25:39.897 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:25:39.897 02:06:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 
13 00:25:39.897 [global] 00:25:39.897 thread=1 00:25:39.897 invalidate=1 00:25:39.897 rw=read 00:25:39.897 time_based=1 00:25:39.897 runtime=10 00:25:39.897 ioengine=libaio 00:25:39.897 direct=1 00:25:39.897 bs=1048576 00:25:39.897 iodepth=128 00:25:39.897 norandommap=1 00:25:39.897 numjobs=13 00:25:39.897 00:25:39.897 [job0] 00:25:39.897 filename=/dev/nvme0n1 00:25:39.897 [job1] 00:25:39.897 filename=/dev/nvme1n1 00:25:39.897 [job2] 00:25:39.897 filename=/dev/nvme2n1 00:25:39.897 [job3] 00:25:39.897 filename=/dev/nvme3n1 00:25:39.897 [job4] 00:25:39.897 filename=/dev/nvme4n1 00:25:39.897 [job5] 00:25:39.897 filename=/dev/nvme5n1 00:25:39.897 Could not set queue depth (nvme0n1) 00:25:39.897 Could not set queue depth (nvme1n1) 00:25:39.897 Could not set queue depth (nvme2n1) 00:25:39.897 Could not set queue depth (nvme3n1) 00:25:39.897 Could not set queue depth (nvme4n1) 00:25:39.897 Could not set queue depth (nvme5n1) 00:25:40.155 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:25:40.155 ... 00:25:40.155 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:25:40.155 ... 00:25:40.155 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:25:40.155 ... 00:25:40.155 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:25:40.155 ... 00:25:40.155 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:25:40.155 ... 00:25:40.155 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:25:40.155 ... 
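Stripped of timestamps, the job file fio-wrapper feeds to fio above assembles to the following: six sequential-read jobs at 1 MiB block size, queue depth 128, 13 threads each, one per connected namespace, which is where fio's "Starting 78 threads" (6 x 13) below comes from:

[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=1048576
iodepth=128
norandommap=1
numjobs=13

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme1n1
[job2]
filename=/dev/nvme2n1
[job3]
filename=/dev/nvme3n1
[job4]
filename=/dev/nvme4n1
[job5]
filename=/dev/nvme5n1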
00:25:40.155 fio-3.35 00:25:40.155 Starting 78 threads 00:25:52.359 00:25:52.359 job0: (groupid=0, jobs=1): err= 0: pid=3321396: Wed Oct 9 02:07:10 2024 00:25:52.359 read: IOPS=54, BW=54.4MiB/s (57.0MB/s)(559MiB/10283msec) 00:25:52.359 slat (usec): min=53, max=266595, avg=18137.95, stdev=47928.98 00:25:52.359 clat (msec): min=140, max=3826, avg=2097.26, stdev=749.66 00:25:52.359 lat (msec): min=325, max=3832, avg=2115.39, stdev=751.75 00:25:52.359 clat percentiles (msec): 00:25:52.359 | 1.00th=[ 363], 5.00th=[ 592], 10.00th=[ 1036], 20.00th=[ 1737], 00:25:52.359 | 30.00th=[ 1821], 40.00th=[ 1888], 50.00th=[ 2089], 60.00th=[ 2232], 00:25:52.359 | 70.00th=[ 2333], 80.00th=[ 2567], 90.00th=[ 3171], 95.00th=[ 3540], 00:25:52.359 | 99.00th=[ 3775], 99.50th=[ 3809], 99.90th=[ 3842], 99.95th=[ 3842], 00:25:52.359 | 99.99th=[ 3842] 00:25:52.359 bw ( KiB/s): min= 8192, max=88064, per=1.41%, avg=58845.87, stdev=19696.53, samples=15 00:25:52.359 iops : min= 8, max= 86, avg=57.47, stdev=19.23, samples=15 00:25:52.359 lat (msec) : 250=0.18%, 500=2.68%, 750=2.68%, 1000=2.86%, 2000=37.57% 00:25:52.359 lat (msec) : >=2000=54.03% 00:25:52.359 cpu : usr=0.01%, sys=1.78%, ctx=604, majf=0, minf=32769 00:25:52.359 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.7%, >=64=88.7% 00:25:52.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.359 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.359 issued rwts: total=559,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.359 job0: (groupid=0, jobs=1): err= 0: pid=3321397: Wed Oct 9 02:07:10 2024 00:25:52.359 read: IOPS=68, BW=68.2MiB/s (71.5MB/s)(694MiB/10171msec) 00:25:52.359 slat (usec): min=34, max=277050, avg=14485.30, stdev=37355.01 00:25:52.359 clat (msec): min=115, max=2946, avg=1658.78, stdev=625.37 00:25:52.359 lat (msec): min=212, max=2948, avg=1673.27, stdev=627.69 00:25:52.359 clat percentiles (msec): 00:25:52.359 | 1.00th=[ 224], 5.00th=[ 567], 10.00th=[ 860], 20.00th=[ 1150], 00:25:52.359 | 30.00th=[ 1318], 40.00th=[ 1502], 50.00th=[ 1620], 60.00th=[ 1720], 00:25:52.359 | 70.00th=[ 1905], 80.00th=[ 2106], 90.00th=[ 2668], 95.00th=[ 2802], 00:25:52.359 | 99.00th=[ 2903], 99.50th=[ 2937], 99.90th=[ 2937], 99.95th=[ 2937], 00:25:52.359 | 99.99th=[ 2937] 00:25:52.359 bw ( KiB/s): min=32702, max=126976, per=1.85%, avg=77262.80, stdev=29405.25, samples=15 00:25:52.359 iops : min= 31, max= 124, avg=75.33, stdev=28.81, samples=15 00:25:52.359 lat (msec) : 250=1.44%, 500=2.16%, 750=4.32%, 1000=3.17%, 2000=64.12% 00:25:52.359 lat (msec) : >=2000=24.78% 00:25:52.359 cpu : usr=0.04%, sys=1.33%, ctx=784, majf=0, minf=32769 00:25:52.359 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.6%, >=64=90.9% 00:25:52.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.359 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.359 issued rwts: total=694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.359 job0: (groupid=0, jobs=1): err= 0: pid=3321398: Wed Oct 9 02:07:10 2024 00:25:52.359 read: IOPS=34, BW=34.0MiB/s (35.7MB/s)(347MiB/10205msec) 00:25:52.359 slat (usec): min=645, max=192454, avg=28931.93, stdev=40982.94 00:25:52.359 clat (msec): min=162, max=4614, avg=3044.75, stdev=1046.35 00:25:52.359 lat (msec): min=231, max=4655, avg=3073.68, stdev=1044.28 00:25:52.359 clat percentiles (msec): 
00:25:52.359 | 1.00th=[ 257], 5.00th=[ 709], 10.00th=[ 1200], 20.00th=[ 2333], 00:25:52.359 | 30.00th=[ 2970], 40.00th=[ 3037], 50.00th=[ 3272], 60.00th=[ 3473], 00:25:52.359 | 70.00th=[ 3641], 80.00th=[ 3842], 90.00th=[ 4077], 95.00th=[ 4396], 00:25:52.359 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597], 00:25:52.359 | 99.99th=[ 4597] 00:25:52.359 bw ( KiB/s): min=24576, max=53248, per=0.90%, avg=37371.50, stdev=9695.41, samples=12 00:25:52.359 iops : min= 24, max= 52, avg=36.42, stdev= 9.57, samples=12 00:25:52.359 lat (msec) : 250=0.86%, 500=3.46%, 750=2.02%, 1000=2.02%, 2000=8.65% 00:25:52.359 lat (msec) : >=2000=83.00% 00:25:52.359 cpu : usr=0.02%, sys=1.45%, ctx=737, majf=0, minf=32769 00:25:52.359 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.6%, 32=9.2%, >=64=81.8% 00:25:52.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.359 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:25:52.359 issued rwts: total=347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.359 job0: (groupid=0, jobs=1): err= 0: pid=3321399: Wed Oct 9 02:07:10 2024 00:25:52.359 read: IOPS=63, BW=63.9MiB/s (67.0MB/s)(658MiB/10298msec) 00:25:52.359 slat (usec): min=38, max=256228, avg=15403.19, stdev=42655.37 00:25:52.359 clat (msec): min=159, max=2426, avg=1893.99, stdev=455.21 00:25:52.359 lat (msec): min=322, max=2491, avg=1909.40, stdev=455.33 00:25:52.359 clat percentiles (msec): 00:25:52.359 | 1.00th=[ 384], 5.00th=[ 625], 10.00th=[ 1234], 20.00th=[ 1754], 00:25:52.359 | 30.00th=[ 1938], 40.00th=[ 1989], 50.00th=[ 2039], 60.00th=[ 2089], 00:25:52.359 | 70.00th=[ 2123], 80.00th=[ 2165], 90.00th=[ 2265], 95.00th=[ 2333], 00:25:52.359 | 99.00th=[ 2400], 99.50th=[ 2400], 99.90th=[ 2433], 99.95th=[ 2433], 00:25:52.359 | 99.99th=[ 2433] 00:25:52.359 bw ( KiB/s): min= 8192, max=86016, per=1.44%, avg=60286.94, stdev=17072.37, samples=18 00:25:52.359 iops : min= 8, max= 84, avg=58.78, stdev=16.63, samples=18 00:25:52.359 lat (msec) : 250=0.15%, 500=2.43%, 750=2.43%, 1000=2.74%, 2000=36.02% 00:25:52.359 lat (msec) : >=2000=56.23% 00:25:52.359 cpu : usr=0.02%, sys=1.67%, ctx=638, majf=0, minf=32770 00:25:52.359 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.9%, >=64=90.4% 00:25:52.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.359 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.359 issued rwts: total=658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.359 job0: (groupid=0, jobs=1): err= 0: pid=3321400: Wed Oct 9 02:07:10 2024 00:25:52.359 read: IOPS=59, BW=59.0MiB/s (61.9MB/s)(597MiB/10113msec) 00:25:52.359 slat (usec): min=33, max=188648, avg=16814.31, stdev=35606.79 00:25:52.359 clat (msec): min=70, max=2806, avg=1812.41, stdev=639.51 00:25:52.359 lat (msec): min=212, max=2928, avg=1829.22, stdev=642.70 00:25:52.359 clat percentiles (msec): 00:25:52.359 | 1.00th=[ 230], 5.00th=[ 418], 10.00th=[ 726], 20.00th=[ 1301], 00:25:52.359 | 30.00th=[ 1620], 40.00th=[ 1821], 50.00th=[ 2039], 60.00th=[ 2165], 00:25:52.359 | 70.00th=[ 2232], 80.00th=[ 2299], 90.00th=[ 2400], 95.00th=[ 2702], 00:25:52.359 | 99.00th=[ 2769], 99.50th=[ 2802], 99.90th=[ 2802], 99.95th=[ 2802], 00:25:52.359 | 99.99th=[ 2802] 00:25:52.359 bw ( KiB/s): min=14336, max=96256, per=1.53%, avg=64040.13, stdev=24344.02, samples=15 00:25:52.359 iops : min= 14, 
max= 94, avg=62.53, stdev=23.78, samples=15 00:25:52.359 lat (msec) : 100=0.17%, 250=2.51%, 500=2.51%, 750=5.19%, 1000=3.18% 00:25:52.359 lat (msec) : 2000=34.84%, >=2000=51.59% 00:25:52.359 cpu : usr=0.04%, sys=1.62%, ctx=662, majf=0, minf=32769 00:25:52.359 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.4%, >=64=89.4% 00:25:52.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.359 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.359 issued rwts: total=597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.359 job0: (groupid=0, jobs=1): err= 0: pid=3321401: Wed Oct 9 02:07:10 2024 00:25:52.359 read: IOPS=47, BW=48.0MiB/s (50.3MB/s)(490MiB/10209msec) 00:25:52.359 slat (usec): min=35, max=618101, avg=20595.28, stdev=50522.92 00:25:52.359 clat (msec): min=114, max=3573, avg=2241.26, stdev=816.81 00:25:52.359 lat (msec): min=212, max=3621, avg=2261.86, stdev=818.24 00:25:52.359 clat percentiles (msec): 00:25:52.359 | 1.00th=[ 224], 5.00th=[ 518], 10.00th=[ 986], 20.00th=[ 1603], 00:25:52.359 | 30.00th=[ 1921], 40.00th=[ 2198], 50.00th=[ 2467], 60.00th=[ 2567], 00:25:52.359 | 70.00th=[ 2668], 80.00th=[ 2802], 90.00th=[ 3306], 95.00th=[ 3440], 00:25:52.359 | 99.00th=[ 3507], 99.50th=[ 3574], 99.90th=[ 3574], 99.95th=[ 3574], 00:25:52.359 | 99.99th=[ 3574] 00:25:52.359 bw ( KiB/s): min=24625, max=94208, per=1.27%, avg=52963.00, stdev=21705.06, samples=14 00:25:52.359 iops : min= 24, max= 92, avg=51.57, stdev=21.30, samples=14 00:25:52.359 lat (msec) : 250=1.22%, 500=2.45%, 750=3.27%, 1000=4.49%, 2000=22.86% 00:25:52.359 lat (msec) : >=2000=65.71% 00:25:52.359 cpu : usr=0.03%, sys=1.27%, ctx=646, majf=0, minf=32769 00:25:52.359 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=6.5%, >=64=87.1% 00:25:52.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.359 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.359 issued rwts: total=490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.359 job0: (groupid=0, jobs=1): err= 0: pid=3321402: Wed Oct 9 02:07:10 2024 00:25:52.359 read: IOPS=45, BW=45.2MiB/s (47.4MB/s)(460MiB/10175msec) 00:25:52.359 slat (usec): min=40, max=265704, avg=21760.01, stdev=48995.32 00:25:52.359 clat (msec): min=162, max=3619, avg=2430.17, stdev=701.90 00:25:52.359 lat (msec): min=231, max=3620, avg=2451.93, stdev=700.10 00:25:52.359 clat percentiles (msec): 00:25:52.359 | 1.00th=[ 247], 5.00th=[ 726], 10.00th=[ 1418], 20.00th=[ 2140], 00:25:52.359 | 30.00th=[ 2232], 40.00th=[ 2366], 50.00th=[ 2500], 60.00th=[ 2702], 00:25:52.359 | 70.00th=[ 2903], 80.00th=[ 2970], 90.00th=[ 3138], 95.00th=[ 3239], 00:25:52.359 | 99.00th=[ 3608], 99.50th=[ 3608], 99.90th=[ 3608], 99.95th=[ 3608], 00:25:52.359 | 99.99th=[ 3608] 00:25:52.359 bw ( KiB/s): min= 6144, max=71536, per=1.09%, avg=45444.27, stdev=17538.53, samples=15 00:25:52.359 iops : min= 6, max= 69, avg=44.20, stdev=17.05, samples=15 00:25:52.359 lat (msec) : 250=1.09%, 500=2.17%, 750=1.96%, 1000=1.96%, 2000=7.83% 00:25:52.359 lat (msec) : >=2000=85.00% 00:25:52.359 cpu : usr=0.02%, sys=1.44%, ctx=613, majf=0, minf=32769 00:25:52.359 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=7.0%, >=64=86.3% 00:25:52.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.359 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.3% 00:25:52.359 issued rwts: total=460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.359 job0: (groupid=0, jobs=1): err= 0: pid=3321403: Wed Oct 9 02:07:10 2024 00:25:52.359 read: IOPS=43, BW=43.8MiB/s (45.9MB/s)(447MiB/10203msec) 00:25:52.359 slat (usec): min=39, max=292570, avg=22613.67, stdev=52144.66 00:25:52.359 clat (msec): min=92, max=4588, avg=2530.23, stdev=1163.52 00:25:52.359 lat (msec): min=282, max=4708, avg=2552.84, stdev=1165.26 00:25:52.359 clat percentiles (msec): 00:25:52.359 | 1.00th=[ 309], 5.00th=[ 1011], 10.00th=[ 1267], 20.00th=[ 1334], 00:25:52.359 | 30.00th=[ 1603], 40.00th=[ 1871], 50.00th=[ 2198], 60.00th=[ 2970], 00:25:52.359 | 70.00th=[ 3507], 80.00th=[ 3910], 90.00th=[ 4144], 95.00th=[ 4245], 00:25:52.359 | 99.00th=[ 4463], 99.50th=[ 4530], 99.90th=[ 4597], 99.95th=[ 4597], 00:25:52.359 | 99.99th=[ 4597] 00:25:52.359 bw ( KiB/s): min=16384, max=96256, per=0.98%, avg=40821.44, stdev=24586.66, samples=16 00:25:52.359 iops : min= 16, max= 94, avg=39.69, stdev=24.11, samples=16 00:25:52.359 lat (msec) : 100=0.22%, 500=0.89%, 750=1.79%, 1000=2.01%, 2000=40.72% 00:25:52.359 lat (msec) : >=2000=54.36% 00:25:52.359 cpu : usr=0.01%, sys=1.22%, ctx=676, majf=0, minf=32769 00:25:52.359 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.9% 00:25:52.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.359 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.359 issued rwts: total=447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.359 job0: (groupid=0, jobs=1): err= 0: pid=3321404: Wed Oct 9 02:07:10 2024 00:25:52.359 read: IOPS=50, BW=50.9MiB/s (53.4MB/s)(521MiB/10230msec) 00:25:52.359 slat (usec): min=37, max=303088, avg=19216.84, stdev=48694.58 00:25:52.359 clat (msec): min=215, max=4180, avg=2340.75, stdev=888.26 00:25:52.359 lat (msec): min=231, max=4180, avg=2359.97, stdev=887.88 00:25:52.359 clat percentiles (msec): 00:25:52.359 | 1.00th=[ 464], 5.00th=[ 1284], 10.00th=[ 1368], 20.00th=[ 1653], 00:25:52.359 | 30.00th=[ 1888], 40.00th=[ 2039], 50.00th=[ 2165], 60.00th=[ 2232], 00:25:52.359 | 70.00th=[ 2534], 80.00th=[ 3171], 90.00th=[ 3910], 95.00th=[ 4010], 00:25:52.359 | 99.00th=[ 4111], 99.50th=[ 4178], 99.90th=[ 4178], 99.95th=[ 4178], 00:25:52.359 | 99.99th=[ 4178] 00:25:52.359 bw ( KiB/s): min=16384, max=98304, per=1.13%, avg=47337.47, stdev=27765.28, samples=17 00:25:52.359 iops : min= 16, max= 96, avg=46.18, stdev=27.08, samples=17 00:25:52.359 lat (msec) : 250=0.77%, 500=0.96%, 750=1.34%, 1000=1.15%, 2000=31.09% 00:25:52.359 lat (msec) : >=2000=64.68% 00:25:52.359 cpu : usr=0.02%, sys=1.55%, ctx=692, majf=0, minf=32769 00:25:52.359 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.1%, >=64=87.9% 00:25:52.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.359 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.359 issued rwts: total=521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.359 job0: (groupid=0, jobs=1): err= 0: pid=3321405: Wed Oct 9 02:07:10 2024 00:25:52.359 read: IOPS=47, BW=47.8MiB/s (50.1MB/s)(490MiB/10254msec) 00:25:52.359 slat (usec): min=38, max=205959, avg=20531.20, stdev=37086.96 00:25:52.359 clat (msec): min=189, max=4092, avg=2364.63, stdev=915.80 00:25:52.359 lat 
(msec): min=379, max=4095, avg=2385.16, stdev=916.69 00:25:52.359 clat percentiles (msec): 00:25:52.359 | 1.00th=[ 384], 5.00th=[ 617], 10.00th=[ 1083], 20.00th=[ 1720], 00:25:52.359 | 30.00th=[ 1972], 40.00th=[ 2165], 50.00th=[ 2366], 60.00th=[ 2601], 00:25:52.359 | 70.00th=[ 2937], 80.00th=[ 3071], 90.00th=[ 3574], 95.00th=[ 3943], 00:25:52.359 | 99.00th=[ 4077], 99.50th=[ 4077], 99.90th=[ 4077], 99.95th=[ 4077], 00:25:52.359 | 99.99th=[ 4077] 00:25:52.359 bw ( KiB/s): min=16384, max=104239, per=1.27%, avg=52929.36, stdev=26452.25, samples=14 00:25:52.359 iops : min= 16, max= 101, avg=51.57, stdev=25.65, samples=14 00:25:52.359 lat (msec) : 250=0.20%, 500=3.06%, 750=3.06%, 1000=3.27%, 2000=22.65% 00:25:52.359 lat (msec) : >=2000=67.76% 00:25:52.359 cpu : usr=0.01%, sys=1.90%, ctx=772, majf=0, minf=32769 00:25:52.359 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=6.5%, >=64=87.1% 00:25:52.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.359 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.359 issued rwts: total=490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.359 job0: (groupid=0, jobs=1): err= 0: pid=3321406: Wed Oct 9 02:07:10 2024 00:25:52.359 read: IOPS=43, BW=43.8MiB/s (46.0MB/s)(449MiB/10242msec) 00:25:52.359 slat (usec): min=55, max=220514, avg=22600.39, stdev=49483.55 00:25:52.359 clat (msec): min=92, max=4365, avg=2598.75, stdev=1114.70 00:25:52.359 lat (msec): min=281, max=4373, avg=2621.35, stdev=1116.68 00:25:52.359 clat percentiles (msec): 00:25:52.359 | 1.00th=[ 300], 5.00th=[ 768], 10.00th=[ 1502], 20.00th=[ 1720], 00:25:52.359 | 30.00th=[ 1787], 40.00th=[ 1955], 50.00th=[ 2232], 60.00th=[ 2903], 00:25:52.359 | 70.00th=[ 3507], 80.00th=[ 3910], 90.00th=[ 4144], 95.00th=[ 4245], 00:25:52.359 | 99.00th=[ 4396], 99.50th=[ 4396], 99.90th=[ 4396], 99.95th=[ 4396], 00:25:52.359 | 99.99th=[ 4396] 00:25:52.359 bw ( KiB/s): min= 4096, max=88064, per=0.93%, avg=38652.18, stdev=22600.32, samples=17 00:25:52.359 iops : min= 4, max= 86, avg=37.53, stdev=22.07, samples=17 00:25:52.359 lat (msec) : 100=0.22%, 500=1.78%, 750=2.45%, 1000=1.78%, 2000=36.75% 00:25:52.359 lat (msec) : >=2000=57.02% 00:25:52.359 cpu : usr=0.04%, sys=1.27%, ctx=658, majf=0, minf=32769 00:25:52.359 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.1%, >=64=86.0% 00:25:52.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.359 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.359 issued rwts: total=449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.359 job0: (groupid=0, jobs=1): err= 0: pid=3321407: Wed Oct 9 02:07:10 2024 00:25:52.359 read: IOPS=68, BW=68.0MiB/s (71.4MB/s)(693MiB/10184msec) 00:25:52.359 slat (usec): min=34, max=194096, avg=14451.42, stdev=31652.02 00:25:52.359 clat (msec): min=165, max=3385, avg=1734.58, stdev=687.07 00:25:52.359 lat (msec): min=210, max=3393, avg=1749.03, stdev=688.60 00:25:52.359 clat percentiles (msec): 00:25:52.359 | 1.00th=[ 232], 5.00th=[ 667], 10.00th=[ 1099], 20.00th=[ 1250], 00:25:52.359 | 30.00th=[ 1334], 40.00th=[ 1519], 50.00th=[ 1670], 60.00th=[ 1804], 00:25:52.359 | 70.00th=[ 1871], 80.00th=[ 1972], 90.00th=[ 2869], 95.00th=[ 3306], 00:25:52.359 | 99.00th=[ 3373], 99.50th=[ 3373], 99.90th=[ 3373], 99.95th=[ 3373], 00:25:52.359 | 99.99th=[ 3373] 00:25:52.359 bw ( KiB/s): 
min=16351, max=126976, per=1.73%, avg=72297.19, stdev=28436.61, samples=16 00:25:52.359 iops : min= 15, max= 124, avg=70.44, stdev=27.86, samples=16 00:25:52.359 lat (msec) : 250=1.30%, 500=2.31%, 750=2.16%, 1000=2.60%, 2000=72.15% 00:25:52.359 lat (msec) : >=2000=19.48% 00:25:52.359 cpu : usr=0.03%, sys=1.83%, ctx=717, majf=0, minf=32769 00:25:52.359 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.6%, >=64=90.9% 00:25:52.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.359 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.359 issued rwts: total=693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.359 job0: (groupid=0, jobs=1): err= 0: pid=3321408: Wed Oct 9 02:07:10 2024 00:25:52.359 read: IOPS=52, BW=52.4MiB/s (55.0MB/s)(535MiB/10206msec) 00:25:52.359 slat (usec): min=34, max=268464, avg=18885.42, stdev=46129.93 00:25:52.359 clat (msec): min=99, max=3272, avg=2202.23, stdev=763.78 00:25:52.359 lat (msec): min=258, max=3275, avg=2221.11, stdev=766.04 00:25:52.359 clat percentiles (msec): 00:25:52.359 | 1.00th=[ 264], 5.00th=[ 451], 10.00th=[ 902], 20.00th=[ 1485], 00:25:52.359 | 30.00th=[ 2123], 40.00th=[ 2333], 50.00th=[ 2500], 60.00th=[ 2635], 00:25:52.359 | 70.00th=[ 2702], 80.00th=[ 2769], 90.00th=[ 2903], 95.00th=[ 2970], 00:25:52.359 | 99.00th=[ 3138], 99.50th=[ 3138], 99.90th=[ 3272], 99.95th=[ 3272], 00:25:52.359 | 99.99th=[ 3272] 00:25:52.359 bw ( KiB/s): min= 2048, max=88064, per=1.17%, avg=49015.82, stdev=22384.27, samples=17 00:25:52.359 iops : min= 2, max= 86, avg=47.71, stdev=21.90, samples=17 00:25:52.359 lat (msec) : 100=0.19%, 500=5.42%, 750=2.24%, 1000=3.36%, 2000=15.70% 00:25:52.359 lat (msec) : >=2000=73.08% 00:25:52.359 cpu : usr=0.01%, sys=1.37%, ctx=691, majf=0, minf=32769 00:25:52.359 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2% 00:25:52.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.359 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.359 issued rwts: total=535,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.359 job1: (groupid=0, jobs=1): err= 0: pid=3321410: Wed Oct 9 02:07:10 2024 00:25:52.359 read: IOPS=60, BW=60.9MiB/s (63.9MB/s)(623MiB/10223msec) 00:25:52.359 slat (usec): min=40, max=218141, avg=16048.75, stdev=38782.97 00:25:52.359 clat (msec): min=220, max=3046, avg=1919.93, stdev=515.66 00:25:52.359 lat (msec): min=222, max=3047, avg=1935.98, stdev=514.25 00:25:52.359 clat percentiles (msec): 00:25:52.359 | 1.00th=[ 334], 5.00th=[ 1116], 10.00th=[ 1167], 20.00th=[ 1536], 00:25:52.359 | 30.00th=[ 1720], 40.00th=[ 1888], 50.00th=[ 2005], 60.00th=[ 2106], 00:25:52.360 | 70.00th=[ 2165], 80.00th=[ 2333], 90.00th=[ 2433], 95.00th=[ 2735], 00:25:52.360 | 99.00th=[ 2903], 99.50th=[ 2970], 99.90th=[ 3037], 99.95th=[ 3037], 00:25:52.360 | 99.99th=[ 3037] 00:25:52.360 bw ( KiB/s): min= 6144, max=114688, per=1.43%, avg=59748.82, stdev=31039.39, samples=17 00:25:52.360 iops : min= 6, max= 112, avg=58.29, stdev=30.35, samples=17 00:25:52.360 lat (msec) : 250=0.48%, 500=1.44%, 750=1.28%, 1000=1.28%, 2000=43.98% 00:25:52.360 lat (msec) : >=2000=51.52% 00:25:52.360 cpu : usr=0.05%, sys=1.51%, ctx=802, majf=0, minf=32769 00:25:52.360 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.1%, >=64=89.9% 00:25:52.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.360 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.360 issued rwts: total=623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.360 job1: (groupid=0, jobs=1): err= 0: pid=3321411: Wed Oct 9 02:07:10 2024 00:25:52.360 read: IOPS=78, BW=78.3MiB/s (82.1MB/s)(801MiB/10228msec) 00:25:52.360 slat (usec): min=45, max=182069, avg=12558.01, stdev=31707.28 00:25:52.360 clat (msec): min=163, max=2566, avg=1500.62, stdev=380.09 00:25:52.360 lat (msec): min=257, max=2567, avg=1513.17, stdev=378.88 00:25:52.360 clat percentiles (msec): 00:25:52.360 | 1.00th=[ 409], 5.00th=[ 1083], 10.00th=[ 1116], 20.00th=[ 1234], 00:25:52.360 | 30.00th=[ 1284], 40.00th=[ 1334], 50.00th=[ 1418], 60.00th=[ 1536], 00:25:52.360 | 70.00th=[ 1653], 80.00th=[ 1804], 90.00th=[ 2039], 95.00th=[ 2232], 00:25:52.360 | 99.00th=[ 2433], 99.50th=[ 2567], 99.90th=[ 2567], 99.95th=[ 2567], 00:25:52.360 | 99.99th=[ 2567] 00:25:52.360 bw ( KiB/s): min=20480, max=126976, per=1.94%, avg=81040.47, stdev=32581.81, samples=17 00:25:52.360 iops : min= 20, max= 124, avg=79.00, stdev=31.72, samples=17 00:25:52.360 lat (msec) : 250=0.12%, 500=1.37%, 750=1.00%, 1000=0.62%, 2000=85.52% 00:25:52.360 lat (msec) : >=2000=11.36% 00:25:52.360 cpu : usr=0.10%, sys=1.78%, ctx=945, majf=0, minf=32769 00:25:52.360 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.1% 00:25:52.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.360 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:52.360 issued rwts: total=801,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.360 job1: (groupid=0, jobs=1): err= 0: pid=3321412: Wed Oct 9 02:07:10 2024 00:25:52.360 read: IOPS=63, BW=63.9MiB/s (67.0MB/s)(654MiB/10233msec) 00:25:52.360 slat (usec): min=39, max=291461, avg=15404.39, stdev=40848.78 00:25:52.360 clat (msec): min=154, max=3320, avg=1818.52, stdev=696.90 00:25:52.360 lat (msec): min=303, max=3324, avg=1833.93, stdev=698.40 00:25:52.360 clat percentiles (msec): 00:25:52.360 | 1.00th=[ 326], 5.00th=[ 625], 10.00th=[ 927], 20.00th=[ 1234], 00:25:52.360 | 30.00th=[ 1418], 40.00th=[ 1620], 50.00th=[ 1821], 60.00th=[ 1989], 00:25:52.360 | 70.00th=[ 2232], 80.00th=[ 2433], 90.00th=[ 2702], 95.00th=[ 2970], 00:25:52.360 | 99.00th=[ 3205], 99.50th=[ 3205], 99.90th=[ 3306], 99.95th=[ 3306], 00:25:52.360 | 99.99th=[ 3306] 00:25:52.360 bw ( KiB/s): min=24576, max=126976, per=1.52%, avg=63359.59, stdev=29164.56, samples=17 00:25:52.360 iops : min= 24, max= 124, avg=61.82, stdev=28.47, samples=17 00:25:52.360 lat (msec) : 250=0.15%, 500=4.59%, 750=2.45%, 1000=4.74%, 2000=48.62% 00:25:52.360 lat (msec) : >=2000=39.45% 00:25:52.360 cpu : usr=0.03%, sys=1.63%, ctx=701, majf=0, minf=32769 00:25:52.360 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.9%, >=64=90.4% 00:25:52.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.360 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.360 issued rwts: total=654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.360 job1: (groupid=0, jobs=1): err= 0: pid=3321413: Wed Oct 9 02:07:10 2024 00:25:52.360 read: IOPS=54, BW=54.8MiB/s (57.5MB/s)(558MiB/10184msec) 00:25:52.360 slat (usec): min=39, max=210997, avg=17946.91, 
stdev=34328.63 00:25:52.360 clat (msec): min=166, max=3514, avg=2118.34, stdev=884.38 00:25:52.360 lat (msec): min=208, max=3557, avg=2136.29, stdev=888.96 00:25:52.360 clat percentiles (msec): 00:25:52.360 | 1.00th=[ 305], 5.00th=[ 477], 10.00th=[ 768], 20.00th=[ 1284], 00:25:52.360 | 30.00th=[ 1536], 40.00th=[ 2005], 50.00th=[ 2265], 60.00th=[ 2500], 00:25:52.360 | 70.00th=[ 2769], 80.00th=[ 3037], 90.00th=[ 3171], 95.00th=[ 3272], 00:25:52.360 | 99.00th=[ 3406], 99.50th=[ 3440], 99.90th=[ 3507], 99.95th=[ 3507], 00:25:52.360 | 99.99th=[ 3507] 00:25:52.360 bw ( KiB/s): min=18432, max=96256, per=1.32%, avg=55030.00, stdev=22796.69, samples=16 00:25:52.360 iops : min= 18, max= 94, avg=53.62, stdev=22.26, samples=16 00:25:52.360 lat (msec) : 250=0.36%, 500=5.20%, 750=3.58%, 1000=4.84%, 2000=25.99% 00:25:52.360 lat (msec) : >=2000=60.04% 00:25:52.360 cpu : usr=0.03%, sys=1.65%, ctx=708, majf=0, minf=32769 00:25:52.360 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.7%, >=64=88.7% 00:25:52.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.360 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.360 issued rwts: total=558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.360 job1: (groupid=0, jobs=1): err= 0: pid=3321414: Wed Oct 9 02:07:10 2024 00:25:52.360 read: IOPS=47, BW=47.8MiB/s (50.1MB/s)(489MiB/10231msec) 00:25:52.360 slat (usec): min=37, max=211473, avg=20463.51, stdev=39039.91 00:25:52.360 clat (msec): min=220, max=3669, avg=2470.05, stdev=743.75 00:25:52.360 lat (msec): min=238, max=3673, avg=2490.51, stdev=744.27 00:25:52.360 clat percentiles (msec): 00:25:52.360 | 1.00th=[ 305], 5.00th=[ 953], 10.00th=[ 1653], 20.00th=[ 1989], 00:25:52.360 | 30.00th=[ 2140], 40.00th=[ 2232], 50.00th=[ 2400], 60.00th=[ 2668], 00:25:52.360 | 70.00th=[ 3037], 80.00th=[ 3239], 90.00th=[ 3406], 95.00th=[ 3473], 00:25:52.360 | 99.00th=[ 3641], 99.50th=[ 3641], 99.90th=[ 3675], 99.95th=[ 3675], 00:25:52.360 | 99.99th=[ 3675] 00:25:52.360 bw ( KiB/s): min=10240, max=92160, per=1.04%, avg=43606.94, stdev=19102.08, samples=17 00:25:52.360 iops : min= 10, max= 90, avg=42.53, stdev=18.70, samples=17 00:25:52.360 lat (msec) : 250=0.41%, 500=1.64%, 750=1.64%, 1000=1.64%, 2000=15.75% 00:25:52.360 lat (msec) : >=2000=78.94% 00:25:52.360 cpu : usr=0.06%, sys=1.64%, ctx=679, majf=0, minf=32487 00:25:52.360 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=6.5%, >=64=87.1% 00:25:52.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.360 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.360 issued rwts: total=489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.360 job1: (groupid=0, jobs=1): err= 0: pid=3321415: Wed Oct 9 02:07:10 2024 00:25:52.360 read: IOPS=56, BW=57.0MiB/s (59.8MB/s)(577MiB/10124msec) 00:25:52.360 slat (usec): min=58, max=255612, avg=17328.67, stdev=35369.35 00:25:52.360 clat (msec): min=121, max=3587, avg=1786.31, stdev=642.60 00:25:52.360 lat (msec): min=147, max=3608, avg=1803.64, stdev=647.94 00:25:52.360 clat percentiles (msec): 00:25:52.360 | 1.00th=[ 220], 5.00th=[ 514], 10.00th=[ 953], 20.00th=[ 1552], 00:25:52.360 | 30.00th=[ 1636], 40.00th=[ 1703], 50.00th=[ 1770], 60.00th=[ 1787], 00:25:52.360 | 70.00th=[ 1838], 80.00th=[ 2106], 90.00th=[ 2668], 95.00th=[ 3138], 00:25:52.360 | 99.00th=[ 3473], 99.50th=[ 
3473], 99.90th=[ 3574], 99.95th=[ 3574], 00:25:52.360 | 99.99th=[ 3574] 00:25:52.360 bw ( KiB/s): min=45146, max=94208, per=1.70%, avg=70899.23, stdev=14625.29, samples=13 00:25:52.360 iops : min= 44, max= 92, avg=69.23, stdev=14.30, samples=13 00:25:52.360 lat (msec) : 250=1.04%, 500=3.29%, 750=3.81%, 1000=3.12%, 2000=67.59% 00:25:52.360 lat (msec) : >=2000=21.14% 00:25:52.360 cpu : usr=0.02%, sys=1.31%, ctx=729, majf=0, minf=32769 00:25:52.360 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.5%, >=64=89.1% 00:25:52.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.360 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.360 issued rwts: total=577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.360 job1: (groupid=0, jobs=1): err= 0: pid=3321416: Wed Oct 9 02:07:10 2024 00:25:52.360 read: IOPS=46, BW=46.3MiB/s (48.5MB/s)(472MiB/10205msec) 00:25:52.360 slat (usec): min=39, max=229401, avg=21364.12, stdev=37472.55 00:25:52.360 clat (msec): min=118, max=3399, avg=2499.34, stdev=747.80 00:25:52.360 lat (msec): min=213, max=3468, avg=2520.71, stdev=747.17 00:25:52.360 clat percentiles (msec): 00:25:52.360 | 1.00th=[ 228], 5.00th=[ 676], 10.00th=[ 1401], 20.00th=[ 2165], 00:25:52.360 | 30.00th=[ 2265], 40.00th=[ 2467], 50.00th=[ 2601], 60.00th=[ 2802], 00:25:52.360 | 70.00th=[ 3037], 80.00th=[ 3171], 90.00th=[ 3239], 95.00th=[ 3272], 00:25:52.360 | 99.00th=[ 3373], 99.50th=[ 3373], 99.90th=[ 3406], 99.95th=[ 3406], 00:25:52.360 | 99.99th=[ 3406] 00:25:52.360 bw ( KiB/s): min=16384, max=63488, per=1.05%, avg=44010.06, stdev=13581.13, samples=16 00:25:52.360 iops : min= 16, max= 62, avg=42.75, stdev=13.28, samples=16 00:25:52.360 lat (msec) : 250=1.69%, 500=1.91%, 750=2.75%, 1000=1.06%, 2000=8.47% 00:25:52.360 lat (msec) : >=2000=84.11% 00:25:52.360 cpu : usr=0.06%, sys=1.33%, ctx=613, majf=0, minf=32769 00:25:52.360 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.7% 00:25:52.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.360 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.360 issued rwts: total=472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.360 job1: (groupid=0, jobs=1): err= 0: pid=3321417: Wed Oct 9 02:07:10 2024 00:25:52.360 read: IOPS=52, BW=52.1MiB/s (54.6MB/s)(533MiB/10238msec) 00:25:52.360 slat (usec): min=45, max=188822, avg=18891.69, stdev=35409.95 00:25:52.360 clat (msec): min=163, max=4180, avg=2174.33, stdev=822.49 00:25:52.360 lat (msec): min=254, max=4182, avg=2193.22, stdev=822.14 00:25:52.360 clat percentiles (msec): 00:25:52.360 | 1.00th=[ 284], 5.00th=[ 1116], 10.00th=[ 1620], 20.00th=[ 1687], 00:25:52.360 | 30.00th=[ 1787], 40.00th=[ 1804], 50.00th=[ 1854], 60.00th=[ 1921], 00:25:52.360 | 70.00th=[ 2232], 80.00th=[ 2970], 90.00th=[ 3608], 95.00th=[ 3910], 00:25:52.360 | 99.00th=[ 4111], 99.50th=[ 4178], 99.90th=[ 4178], 99.95th=[ 4178], 00:25:52.360 | 99.99th=[ 4178] 00:25:52.360 bw ( KiB/s): min=12288, max=81756, per=1.17%, avg=48760.41, stdev=22173.52, samples=17 00:25:52.360 iops : min= 12, max= 79, avg=47.41, stdev=21.51, samples=17 00:25:52.360 lat (msec) : 250=0.19%, 500=1.88%, 750=1.31%, 1000=1.13%, 2000=60.60% 00:25:52.360 lat (msec) : >=2000=34.90% 00:25:52.360 cpu : usr=0.02%, sys=1.63%, ctx=609, majf=0, minf=32769 00:25:52.360 IO depths : 1=0.2%, 2=0.4%, 
4=0.8%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2% 00:25:52.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.360 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.360 issued rwts: total=533,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.360 job1: (groupid=0, jobs=1): err= 0: pid=3321418: Wed Oct 9 02:07:10 2024 00:25:52.360 read: IOPS=72, BW=72.3MiB/s (75.8MB/s)(730MiB/10100msec) 00:25:52.360 slat (usec): min=39, max=231219, avg=13809.49, stdev=32832.13 00:25:52.360 clat (msec): min=14, max=3275, avg=1544.59, stdev=663.08 00:25:52.360 lat (msec): min=143, max=3275, avg=1558.40, stdev=664.06 00:25:52.360 clat percentiles (msec): 00:25:52.360 | 1.00th=[ 150], 5.00th=[ 502], 10.00th=[ 1116], 20.00th=[ 1167], 00:25:52.360 | 30.00th=[ 1217], 40.00th=[ 1267], 50.00th=[ 1318], 60.00th=[ 1368], 00:25:52.360 | 70.00th=[ 1636], 80.00th=[ 1921], 90.00th=[ 2702], 95.00th=[ 3071], 00:25:52.360 | 99.00th=[ 3239], 99.50th=[ 3272], 99.90th=[ 3272], 99.95th=[ 3272], 00:25:52.360 | 99.99th=[ 3272] 00:25:52.360 bw ( KiB/s): min=20480, max=124928, per=1.85%, avg=77056.00, stdev=34569.16, samples=16 00:25:52.360 iops : min= 20, max= 122, avg=75.25, stdev=33.76, samples=16 00:25:52.360 lat (msec) : 20=0.14%, 250=1.92%, 500=1.10%, 750=2.60%, 1000=0.68% 00:25:52.360 lat (msec) : 2000=74.93%, >=2000=18.63% 00:25:52.360 cpu : usr=0.02%, sys=1.65%, ctx=700, majf=0, minf=32769 00:25:52.360 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.4% 00:25:52.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.360 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.360 issued rwts: total=730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.360 job1: (groupid=0, jobs=1): err= 0: pid=3321419: Wed Oct 9 02:07:10 2024 00:25:52.360 read: IOPS=66, BW=66.3MiB/s (69.5MB/s)(674MiB/10163msec) 00:25:52.360 slat (usec): min=40, max=223724, avg=15048.34, stdev=40421.99 00:25:52.360 clat (msec): min=16, max=3103, avg=1761.30, stdev=600.62 00:25:52.360 lat (msec): min=170, max=3153, avg=1776.35, stdev=601.30 00:25:52.360 clat percentiles (msec): 00:25:52.360 | 1.00th=[ 176], 5.00th=[ 634], 10.00th=[ 1116], 20.00th=[ 1519], 00:25:52.360 | 30.00th=[ 1586], 40.00th=[ 1636], 50.00th=[ 1720], 60.00th=[ 1787], 00:25:52.360 | 70.00th=[ 1854], 80.00th=[ 2140], 90.00th=[ 2735], 95.00th=[ 2869], 00:25:52.360 | 99.00th=[ 2970], 99.50th=[ 2970], 99.90th=[ 3104], 99.95th=[ 3104], 00:25:52.360 | 99.99th=[ 3104] 00:25:52.360 bw ( KiB/s): min=26624, max=122880, per=1.58%, avg=65757.65, stdev=28910.78, samples=17 00:25:52.360 iops : min= 26, max= 120, avg=64.12, stdev=28.18, samples=17 00:25:52.360 lat (msec) : 20=0.15%, 250=2.37%, 500=2.37%, 750=2.23%, 1000=2.23% 00:25:52.360 lat (msec) : 2000=68.55%, >=2000=22.11% 00:25:52.360 cpu : usr=0.05%, sys=1.48%, ctx=625, majf=0, minf=32769 00:25:52.360 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7% 00:25:52.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.360 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.360 issued rwts: total=674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.360 job1: (groupid=0, jobs=1): err= 0: pid=3321420: Wed Oct 9 02:07:10 2024 00:25:52.360 
read: IOPS=66, BW=66.7MiB/s (69.9MB/s)(677MiB/10152msec) 00:25:52.360 slat (usec): min=42, max=223383, avg=14950.59, stdev=40459.80 00:25:52.360 clat (msec): min=25, max=2381, avg=1752.71, stdev=483.99 00:25:52.360 lat (msec): min=169, max=2396, avg=1767.66, stdev=485.05 00:25:52.360 clat percentiles (msec): 00:25:52.360 | 1.00th=[ 171], 5.00th=[ 600], 10.00th=[ 1036], 20.00th=[ 1603], 00:25:52.360 | 30.00th=[ 1687], 40.00th=[ 1821], 50.00th=[ 1888], 60.00th=[ 1905], 00:25:52.360 | 70.00th=[ 2022], 80.00th=[ 2106], 90.00th=[ 2198], 95.00th=[ 2232], 00:25:52.360 | 99.00th=[ 2333], 99.50th=[ 2333], 99.90th=[ 2366], 99.95th=[ 2366], 00:25:52.360 | 99.99th=[ 2366] 00:25:52.360 bw ( KiB/s): min=24576, max=96256, per=1.58%, avg=66120.53, stdev=17196.44, samples=17 00:25:52.360 iops : min= 24, max= 94, avg=64.47, stdev=16.74, samples=17 00:25:52.360 lat (msec) : 50=0.15%, 250=2.51%, 500=1.77%, 750=2.51%, 1000=2.22% 00:25:52.360 lat (msec) : 2000=59.53%, >=2000=31.31% 00:25:52.360 cpu : usr=0.07%, sys=1.81%, ctx=556, majf=0, minf=32769 00:25:52.360 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7% 00:25:52.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.360 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.360 issued rwts: total=677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.360 job1: (groupid=0, jobs=1): err= 0: pid=3321421: Wed Oct 9 02:07:10 2024 00:25:52.360 read: IOPS=37, BW=37.0MiB/s (38.8MB/s)(377MiB/10182msec) 00:25:52.360 slat (usec): min=46, max=203822, avg=26687.83, stdev=42166.98 00:25:52.360 clat (msec): min=118, max=3846, avg=2873.67, stdev=876.61 00:25:52.360 lat (msec): min=213, max=3851, avg=2900.36, stdev=873.88 00:25:52.360 clat percentiles (msec): 00:25:52.360 | 1.00th=[ 228], 5.00th=[ 911], 10.00th=[ 1234], 20.00th=[ 2333], 00:25:52.360 | 30.00th=[ 2836], 40.00th=[ 3071], 50.00th=[ 3171], 60.00th=[ 3272], 00:25:52.360 | 70.00th=[ 3406], 80.00th=[ 3507], 90.00th=[ 3675], 95.00th=[ 3775], 00:25:52.360 | 99.00th=[ 3842], 99.50th=[ 3842], 99.90th=[ 3842], 99.95th=[ 3842], 00:25:52.360 | 99.99th=[ 3842] 00:25:52.360 bw ( KiB/s): min=18432, max=61440, per=0.87%, avg=36415.14, stdev=14080.06, samples=14 00:25:52.360 iops : min= 18, max= 60, avg=35.43, stdev=13.77, samples=14 00:25:52.360 lat (msec) : 250=1.59%, 500=1.33%, 750=0.53%, 1000=2.92%, 2000=9.81% 00:25:52.360 lat (msec) : >=2000=83.82% 00:25:52.360 cpu : usr=0.03%, sys=1.25%, ctx=586, majf=0, minf=32769 00:25:52.360 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.2%, 32=8.5%, >=64=83.3% 00:25:52.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.360 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:52.360 issued rwts: total=377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.360 job1: (groupid=0, jobs=1): err= 0: pid=3321422: Wed Oct 9 02:07:10 2024 00:25:52.360 read: IOPS=53, BW=53.7MiB/s (56.3MB/s)(546MiB/10172msec) 00:25:52.360 slat (usec): min=33, max=219335, avg=18352.26, stdev=46345.14 00:25:52.360 clat (msec): min=148, max=3149, avg=1974.35, stdev=649.32 00:25:52.360 lat (msec): min=230, max=3233, avg=1992.70, stdev=652.23 00:25:52.360 clat percentiles (msec): 00:25:52.360 | 1.00th=[ 443], 5.00th=[ 676], 10.00th=[ 1133], 20.00th=[ 1569], 00:25:52.360 | 30.00th=[ 1670], 40.00th=[ 1838], 50.00th=[ 1972], 60.00th=[ 2123], 
00:25:52.360 | 70.00th=[ 2232], 80.00th=[ 2601], 90.00th=[ 2937], 95.00th=[ 3004], 00:25:52.360 | 99.00th=[ 3138], 99.50th=[ 3138], 99.90th=[ 3138], 99.95th=[ 3138], 00:25:52.360 | 99.99th=[ 3138] 00:25:52.360 bw ( KiB/s): min=22528, max=96256, per=1.46%, avg=61129.29, stdev=24898.40, samples=14 00:25:52.360 iops : min= 22, max= 94, avg=59.57, stdev=24.30, samples=14 00:25:52.360 lat (msec) : 250=0.73%, 500=2.75%, 750=2.75%, 1000=2.75%, 2000=44.51% 00:25:52.360 lat (msec) : >=2000=46.52% 00:25:52.360 cpu : usr=0.04%, sys=1.33%, ctx=727, majf=0, minf=32769 00:25:52.360 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.9%, >=64=88.5% 00:25:52.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.360 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.360 issued rwts: total=546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.360 job2: (groupid=0, jobs=1): err= 0: pid=3321423: Wed Oct 9 02:07:10 2024 00:25:52.360 read: IOPS=43, BW=43.1MiB/s (45.2MB/s)(442MiB/10263msec) 00:25:52.360 slat (usec): min=33, max=254045, avg=22965.29, stdev=55353.10 00:25:52.360 clat (msec): min=110, max=4569, avg=2758.70, stdev=967.14 00:25:52.360 lat (msec): min=298, max=4643, avg=2781.66, stdev=967.46 00:25:52.360 clat percentiles (msec): 00:25:52.360 | 1.00th=[ 326], 5.00th=[ 1217], 10.00th=[ 1787], 20.00th=[ 1972], 00:25:52.360 | 30.00th=[ 2123], 40.00th=[ 2333], 50.00th=[ 2702], 60.00th=[ 2937], 00:25:52.360 | 70.00th=[ 3440], 80.00th=[ 3842], 90.00th=[ 4077], 95.00th=[ 4329], 00:25:52.360 | 99.00th=[ 4463], 99.50th=[ 4463], 99.90th=[ 4597], 99.95th=[ 4597], 00:25:52.360 | 99.99th=[ 4597] 00:25:52.360 bw ( KiB/s): min=14336, max=81920, per=0.91%, avg=37809.88, stdev=16335.31, samples=17 00:25:52.360 iops : min= 14, max= 80, avg=36.76, stdev=15.83, samples=17 00:25:52.360 lat (msec) : 250=0.23%, 500=1.13%, 750=1.36%, 1000=2.04%, 2000=18.55% 00:25:52.360 lat (msec) : >=2000=76.70% 00:25:52.360 cpu : usr=0.01%, sys=1.37%, ctx=678, majf=0, minf=32769 00:25:52.360 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.7% 00:25:52.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.360 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.360 issued rwts: total=442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.360 job2: (groupid=0, jobs=1): err= 0: pid=3321424: Wed Oct 9 02:07:10 2024 00:25:52.360 read: IOPS=66, BW=66.4MiB/s (69.6MB/s)(680MiB/10240msec) 00:25:52.360 slat (usec): min=39, max=227139, avg=14964.75, stdev=45831.15 00:25:52.360 clat (msec): min=60, max=2563, avg=1771.06, stdev=519.00 00:25:52.360 lat (msec): min=241, max=2600, avg=1786.02, stdev=519.78 00:25:52.360 clat percentiles (msec): 00:25:52.360 | 1.00th=[ 243], 5.00th=[ 609], 10.00th=[ 1083], 20.00th=[ 1418], 00:25:52.360 | 30.00th=[ 1603], 40.00th=[ 1770], 50.00th=[ 1804], 60.00th=[ 1938], 00:25:52.360 | 70.00th=[ 2089], 80.00th=[ 2265], 90.00th=[ 2366], 95.00th=[ 2500], 00:25:52.360 | 99.00th=[ 2567], 99.50th=[ 2567], 99.90th=[ 2567], 99.95th=[ 2567], 00:25:52.360 | 99.99th=[ 2567] 00:25:52.360 bw ( KiB/s): min=10240, max=133120, per=1.59%, avg=66494.35, stdev=29265.58, samples=17 00:25:52.360 iops : min= 10, max= 130, avg=64.76, stdev=28.74, samples=17 00:25:52.360 lat (msec) : 100=0.15%, 250=2.35%, 500=1.03%, 750=2.21%, 1000=2.21% 00:25:52.360 lat (msec) : 
2000=56.62%, >=2000=35.44% 00:25:52.360 cpu : usr=0.05%, sys=1.69%, ctx=700, majf=0, minf=32769 00:25:52.360 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7% 00:25:52.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.360 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.360 issued rwts: total=680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.361 job2: (groupid=0, jobs=1): err= 0: pid=3321425: Wed Oct 9 02:07:10 2024 00:25:52.361 read: IOPS=45, BW=45.5MiB/s (47.8MB/s)(462MiB/10145msec) 00:25:52.361 slat (usec): min=33, max=523067, avg=21837.63, stdev=61128.78 00:25:52.361 clat (msec): min=53, max=3992, avg=2535.39, stdev=893.00 00:25:52.361 lat (msec): min=183, max=3994, avg=2557.23, stdev=894.13 00:25:52.361 clat percentiles (msec): 00:25:52.361 | 1.00th=[ 188], 5.00th=[ 584], 10.00th=[ 1028], 20.00th=[ 1972], 00:25:52.361 | 30.00th=[ 2299], 40.00th=[ 2534], 50.00th=[ 2735], 60.00th=[ 3004], 00:25:52.361 | 70.00th=[ 3104], 80.00th=[ 3239], 90.00th=[ 3406], 95.00th=[ 3540], 00:25:52.361 | 99.00th=[ 3809], 99.50th=[ 3977], 99.90th=[ 3977], 99.95th=[ 3977], 00:25:52.361 | 99.99th=[ 3977] 00:25:52.361 bw ( KiB/s): min= 6144, max=91976, per=0.96%, avg=40221.29, stdev=20398.06, samples=17 00:25:52.361 iops : min= 6, max= 89, avg=39.18, stdev=19.78, samples=17 00:25:52.361 lat (msec) : 100=0.22%, 250=3.03%, 500=1.73%, 750=1.30%, 1000=1.95% 00:25:52.361 lat (msec) : 2000=13.85%, >=2000=77.92% 00:25:52.361 cpu : usr=0.01%, sys=1.18%, ctx=553, majf=0, minf=32769 00:25:52.361 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=6.9%, >=64=86.4% 00:25:52.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.361 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.361 issued rwts: total=462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.361 job2: (groupid=0, jobs=1): err= 0: pid=3321426: Wed Oct 9 02:07:10 2024 00:25:52.361 read: IOPS=45, BW=45.2MiB/s (47.4MB/s)(459MiB/10153msec) 00:25:52.361 slat (usec): min=52, max=193776, avg=22036.14, stdev=41275.13 00:25:52.361 clat (msec): min=36, max=3553, avg=2398.57, stdev=803.36 00:25:52.361 lat (msec): min=168, max=3554, avg=2420.61, stdev=802.27 00:25:52.361 clat percentiles (msec): 00:25:52.361 | 1.00th=[ 199], 5.00th=[ 447], 10.00th=[ 936], 20.00th=[ 2106], 00:25:52.361 | 30.00th=[ 2299], 40.00th=[ 2500], 50.00th=[ 2601], 60.00th=[ 2735], 00:25:52.361 | 70.00th=[ 2836], 80.00th=[ 2970], 90.00th=[ 3239], 95.00th=[ 3339], 00:25:52.361 | 99.00th=[ 3373], 99.50th=[ 3507], 99.90th=[ 3540], 99.95th=[ 3540], 00:25:52.361 | 99.99th=[ 3540] 00:25:52.361 bw ( KiB/s): min=28672, max=77824, per=1.08%, avg=45192.53, stdev=13462.31, samples=15 00:25:52.361 iops : min= 28, max= 76, avg=44.13, stdev=13.15, samples=15 00:25:52.361 lat (msec) : 50=0.22%, 250=3.27%, 500=3.05%, 750=2.18%, 1000=1.96% 00:25:52.361 lat (msec) : 2000=7.19%, >=2000=82.14% 00:25:52.361 cpu : usr=0.02%, sys=1.24%, ctx=625, majf=0, minf=32769 00:25:52.361 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=7.0%, >=64=86.3% 00:25:52.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.361 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.361 issued rwts: total=459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.361 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:25:52.361 job2: (groupid=0, jobs=1): err= 0: pid=3321427: Wed Oct 9 02:07:10 2024 00:25:52.361 read: IOPS=45, BW=45.4MiB/s (47.6MB/s)(463MiB/10194msec) 00:25:52.361 slat (usec): min=41, max=240319, avg=21825.34, stdev=51465.60 00:25:52.361 clat (msec): min=86, max=3792, avg=2365.48, stdev=901.22 00:25:52.361 lat (msec): min=208, max=3793, avg=2387.31, stdev=903.34 00:25:52.361 clat percentiles (msec): 00:25:52.361 | 1.00th=[ 211], 5.00th=[ 439], 10.00th=[ 743], 20.00th=[ 1754], 00:25:52.361 | 30.00th=[ 2232], 40.00th=[ 2400], 50.00th=[ 2534], 60.00th=[ 2702], 00:25:52.361 | 70.00th=[ 2937], 80.00th=[ 3037], 90.00th=[ 3339], 95.00th=[ 3574], 00:25:52.361 | 99.00th=[ 3775], 99.50th=[ 3775], 99.90th=[ 3809], 99.95th=[ 3809], 00:25:52.361 | 99.99th=[ 3809] 00:25:52.361 bw ( KiB/s): min=12288, max=90112, per=1.17%, avg=49010.36, stdev=19498.25, samples=14 00:25:52.361 iops : min= 12, max= 88, avg=47.79, stdev=19.10, samples=14 00:25:52.361 lat (msec) : 100=0.22%, 250=2.81%, 500=3.46%, 750=3.67%, 1000=3.46% 00:25:52.361 lat (msec) : 2000=9.07%, >=2000=77.32% 00:25:52.361 cpu : usr=0.01%, sys=1.20%, ctx=621, majf=0, minf=32769 00:25:52.361 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=6.9%, >=64=86.4% 00:25:52.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.361 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.361 issued rwts: total=463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.361 job2: (groupid=0, jobs=1): err= 0: pid=3321428: Wed Oct 9 02:07:10 2024 00:25:52.361 read: IOPS=44, BW=44.4MiB/s (46.5MB/s)(449MiB/10115msec) 00:25:52.361 slat (usec): min=33, max=223074, avg=22271.96, stdev=56998.55 00:25:52.361 clat (msec): min=110, max=3861, avg=2409.06, stdev=985.87 00:25:52.361 lat (msec): min=175, max=3864, avg=2431.33, stdev=989.75 00:25:52.361 clat percentiles (msec): 00:25:52.361 | 1.00th=[ 180], 5.00th=[ 527], 10.00th=[ 751], 20.00th=[ 1452], 00:25:52.361 | 30.00th=[ 2106], 40.00th=[ 2433], 50.00th=[ 2534], 60.00th=[ 2702], 00:25:52.361 | 70.00th=[ 3037], 80.00th=[ 3373], 90.00th=[ 3608], 95.00th=[ 3742], 00:25:52.361 | 99.00th=[ 3842], 99.50th=[ 3842], 99.90th=[ 3876], 99.95th=[ 3876], 00:25:52.361 | 99.99th=[ 3876] 00:25:52.361 bw ( KiB/s): min=16384, max=77824, per=1.13%, avg=47104.00, stdev=19676.55, samples=14 00:25:52.361 iops : min= 16, max= 76, avg=46.00, stdev=19.22, samples=14 00:25:52.361 lat (msec) : 250=1.34%, 500=3.34%, 750=4.68%, 1000=4.90%, 2000=11.58% 00:25:52.361 lat (msec) : >=2000=74.16% 00:25:52.361 cpu : usr=0.04%, sys=1.24%, ctx=688, majf=0, minf=32769 00:25:52.361 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.1%, >=64=86.0% 00:25:52.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.361 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.361 issued rwts: total=449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.361 job2: (groupid=0, jobs=1): err= 0: pid=3321429: Wed Oct 9 02:07:10 2024 00:25:52.361 read: IOPS=57, BW=57.8MiB/s (60.6MB/s)(589MiB/10185msec) 00:25:52.361 slat (usec): min=37, max=217824, avg=17136.09, stdev=35704.29 00:25:52.361 clat (msec): min=88, max=2584, avg=1930.56, stdev=557.84 00:25:52.361 lat (msec): min=269, max=2586, avg=1947.70, stdev=558.47 00:25:52.361 clat percentiles (msec): 00:25:52.361 | 1.00th=[ 275], 5.00th=[ 
498], 10.00th=[ 936], 20.00th=[ 1720], 00:25:52.361 | 30.00th=[ 1821], 40.00th=[ 1921], 50.00th=[ 2140], 60.00th=[ 2265], 00:25:52.361 | 70.00th=[ 2299], 80.00th=[ 2333], 90.00th=[ 2400], 95.00th=[ 2467], 00:25:52.361 | 99.00th=[ 2534], 99.50th=[ 2567], 99.90th=[ 2601], 99.95th=[ 2601], 00:25:52.361 | 99.99th=[ 2601] 00:25:52.361 bw ( KiB/s): min=32702, max=94396, per=1.41%, avg=59001.00, stdev=19817.25, samples=16 00:25:52.361 iops : min= 31, max= 92, avg=57.44, stdev=19.42, samples=16 00:25:52.361 lat (msec) : 100=0.17%, 500=5.09%, 750=2.72%, 1000=2.55%, 2000=33.79% 00:25:52.361 lat (msec) : >=2000=55.69% 00:25:52.361 cpu : usr=0.00%, sys=1.37%, ctx=662, majf=0, minf=32769 00:25:52.361 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.4%, >=64=89.3% 00:25:52.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.361 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.361 issued rwts: total=589,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.361 job2: (groupid=0, jobs=1): err= 0: pid=3321430: Wed Oct 9 02:07:10 2024 00:25:52.361 read: IOPS=46, BW=46.1MiB/s (48.3MB/s)(468MiB/10159msec) 00:25:52.361 slat (usec): min=46, max=221483, avg=21515.72, stdev=56482.21 00:25:52.361 clat (msec): min=87, max=3844, avg=2330.58, stdev=937.13 00:25:52.361 lat (msec): min=275, max=3862, avg=2352.09, stdev=939.39 00:25:52.361 clat percentiles (msec): 00:25:52.361 | 1.00th=[ 288], 5.00th=[ 523], 10.00th=[ 768], 20.00th=[ 1653], 00:25:52.361 | 30.00th=[ 1921], 40.00th=[ 2165], 50.00th=[ 2433], 60.00th=[ 2635], 00:25:52.361 | 70.00th=[ 2769], 80.00th=[ 3104], 90.00th=[ 3641], 95.00th=[ 3641], 00:25:52.361 | 99.00th=[ 3842], 99.50th=[ 3842], 99.90th=[ 3842], 99.95th=[ 3842], 00:25:52.361 | 99.99th=[ 3842] 00:25:52.361 bw ( KiB/s): min= 4096, max=110592, per=1.19%, avg=49742.07, stdev=24168.05, samples=14 00:25:52.361 iops : min= 4, max= 108, avg=48.57, stdev=23.60, samples=14 00:25:52.361 lat (msec) : 100=0.21%, 500=3.21%, 750=4.70%, 1000=3.85%, 2000=22.01% 00:25:52.361 lat (msec) : >=2000=66.03% 00:25:52.361 cpu : usr=0.00%, sys=1.22%, ctx=630, majf=0, minf=32769 00:25:52.361 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.5% 00:25:52.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.361 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.361 issued rwts: total=468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.361 job2: (groupid=0, jobs=1): err= 0: pid=3321431: Wed Oct 9 02:07:10 2024 00:25:52.361 read: IOPS=48, BW=48.7MiB/s (51.0MB/s)(494MiB/10151msec) 00:25:52.361 slat (usec): min=38, max=516551, avg=20320.00, stdev=48812.54 00:25:52.361 clat (msec): min=110, max=3088, avg=2218.90, stdev=647.51 00:25:52.361 lat (msec): min=298, max=3094, avg=2239.22, stdev=643.50 00:25:52.361 clat percentiles (msec): 00:25:52.361 | 1.00th=[ 321], 5.00th=[ 1351], 10.00th=[ 1435], 20.00th=[ 1569], 00:25:52.361 | 30.00th=[ 1754], 40.00th=[ 2056], 50.00th=[ 2333], 60.00th=[ 2567], 00:25:52.361 | 70.00th=[ 2735], 80.00th=[ 2869], 90.00th=[ 2970], 95.00th=[ 3004], 00:25:52.361 | 99.00th=[ 3071], 99.50th=[ 3104], 99.90th=[ 3104], 99.95th=[ 3104], 00:25:52.361 | 99.99th=[ 3104] 00:25:52.361 bw ( KiB/s): min=16384, max=108544, per=1.20%, avg=49982.40, stdev=30427.65, samples=15 00:25:52.361 iops : min= 16, max= 106, avg=48.80, stdev=29.70, 
samples=15 00:25:52.361 lat (msec) : 250=0.20%, 500=1.21%, 750=1.01%, 1000=1.82%, 2000=33.40% 00:25:52.361 lat (msec) : >=2000=62.35% 00:25:52.361 cpu : usr=0.03%, sys=1.19%, ctx=663, majf=0, minf=32769 00:25:52.361 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.5%, >=64=87.2% 00:25:52.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.361 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.361 issued rwts: total=494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.361 job2: (groupid=0, jobs=1): err= 0: pid=3321432: Wed Oct 9 02:07:10 2024 00:25:52.361 read: IOPS=44, BW=44.8MiB/s (47.0MB/s)(460MiB/10267msec) 00:25:52.361 slat (usec): min=56, max=269300, avg=21863.67, stdev=50547.60 00:25:52.361 clat (msec): min=207, max=3582, avg=2610.65, stdev=709.32 00:25:52.361 lat (msec): min=440, max=3583, avg=2632.51, stdev=706.56 00:25:52.361 clat percentiles (msec): 00:25:52.361 | 1.00th=[ 443], 5.00th=[ 676], 10.00th=[ 1603], 20.00th=[ 2366], 00:25:52.361 | 30.00th=[ 2500], 40.00th=[ 2567], 50.00th=[ 2702], 60.00th=[ 2903], 00:25:52.361 | 70.00th=[ 3071], 80.00th=[ 3138], 90.00th=[ 3306], 95.00th=[ 3373], 00:25:52.361 | 99.00th=[ 3574], 99.50th=[ 3574], 99.90th=[ 3574], 99.95th=[ 3574], 00:25:52.361 | 99.99th=[ 3574] 00:25:52.361 bw ( KiB/s): min= 4096, max=92160, per=0.96%, avg=39994.53, stdev=22262.23, samples=17 00:25:52.361 iops : min= 4, max= 90, avg=39.00, stdev=21.81, samples=17 00:25:52.361 lat (msec) : 250=0.22%, 500=3.48%, 750=1.30%, 1000=1.30%, 2000=6.96% 00:25:52.361 lat (msec) : >=2000=86.74% 00:25:52.361 cpu : usr=0.02%, sys=1.32%, ctx=651, majf=0, minf=32769 00:25:52.361 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=7.0%, >=64=86.3% 00:25:52.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.361 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.361 issued rwts: total=460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.361 job2: (groupid=0, jobs=1): err= 0: pid=3321433: Wed Oct 9 02:07:10 2024 00:25:52.361 read: IOPS=62, BW=62.6MiB/s (65.7MB/s)(640MiB/10222msec) 00:25:52.361 slat (usec): min=37, max=318254, avg=15733.34, stdev=36044.19 00:25:52.361 clat (msec): min=148, max=2630, avg=1869.60, stdev=510.01 00:25:52.361 lat (msec): min=320, max=2630, avg=1885.33, stdev=509.17 00:25:52.361 clat percentiles (msec): 00:25:52.361 | 1.00th=[ 334], 5.00th=[ 1083], 10.00th=[ 1234], 20.00th=[ 1435], 00:25:52.361 | 30.00th=[ 1536], 40.00th=[ 1754], 50.00th=[ 1888], 60.00th=[ 2106], 00:25:52.361 | 70.00th=[ 2265], 80.00th=[ 2366], 90.00th=[ 2467], 95.00th=[ 2534], 00:25:52.361 | 99.00th=[ 2601], 99.50th=[ 2601], 99.90th=[ 2635], 99.95th=[ 2635], 00:25:52.361 | 99.99th=[ 2635] 00:25:52.361 bw ( KiB/s): min=32768, max=120832, per=1.48%, avg=61669.12, stdev=27451.86, samples=17 00:25:52.361 iops : min= 32, max= 118, avg=60.12, stdev=26.86, samples=17 00:25:52.361 lat (msec) : 250=0.16%, 500=1.09%, 750=1.41%, 1000=0.94%, 2000=51.41% 00:25:52.361 lat (msec) : >=2000=45.00% 00:25:52.361 cpu : usr=0.02%, sys=1.62%, ctx=709, majf=0, minf=32769 00:25:52.361 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2% 00:25:52.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.361 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.361 issued 
rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.361 job2: (groupid=0, jobs=1): err= 0: pid=3321434: Wed Oct 9 02:07:10 2024 00:25:52.361 read: IOPS=62, BW=62.0MiB/s (65.0MB/s)(628MiB/10129msec) 00:25:52.361 slat (usec): min=49, max=260794, avg=16039.40, stdev=53409.58 00:25:52.361 clat (msec): min=53, max=2583, avg=1885.59, stdev=552.92 00:25:52.361 lat (msec): min=168, max=2589, avg=1901.63, stdev=554.30 00:25:52.361 clat percentiles (msec): 00:25:52.361 | 1.00th=[ 176], 5.00th=[ 414], 10.00th=[ 1020], 20.00th=[ 1804], 00:25:52.361 | 30.00th=[ 1854], 40.00th=[ 1905], 50.00th=[ 2056], 60.00th=[ 2123], 00:25:52.361 | 70.00th=[ 2165], 80.00th=[ 2299], 90.00th=[ 2400], 95.00th=[ 2433], 00:25:52.361 | 99.00th=[ 2467], 99.50th=[ 2500], 99.90th=[ 2567], 99.95th=[ 2567], 00:25:52.361 | 99.99th=[ 2567] 00:25:52.361 bw ( KiB/s): min=30720, max=94208, per=1.44%, avg=60218.88, stdev=17033.54, samples=17 00:25:52.361 iops : min= 30, max= 92, avg=58.71, stdev=16.58, samples=17 00:25:52.361 lat (msec) : 100=0.16%, 250=2.39%, 500=2.55%, 750=2.23%, 1000=2.55% 00:25:52.361 lat (msec) : 2000=36.78%, >=2000=53.34% 00:25:52.361 cpu : usr=0.02%, sys=1.59%, ctx=545, majf=0, minf=32769 00:25:52.361 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.1%, >=64=90.0% 00:25:52.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.361 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.361 issued rwts: total=628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.361 job2: (groupid=0, jobs=1): err= 0: pid=3321435: Wed Oct 9 02:07:10 2024 00:25:52.361 read: IOPS=60, BW=60.1MiB/s (63.0MB/s)(614MiB/10217msec) 00:25:52.361 slat (usec): min=43, max=249916, avg=16397.87, stdev=49836.92 00:25:52.361 clat (msec): min=145, max=2455, avg=1918.01, stdev=369.43 00:25:52.361 lat (msec): min=299, max=2466, avg=1934.40, stdev=365.95 00:25:52.361 clat percentiles (msec): 00:25:52.361 | 1.00th=[ 523], 5.00th=[ 1284], 10.00th=[ 1469], 20.00th=[ 1670], 00:25:52.361 | 30.00th=[ 1854], 40.00th=[ 1905], 50.00th=[ 2005], 60.00th=[ 2072], 00:25:52.361 | 70.00th=[ 2123], 80.00th=[ 2198], 90.00th=[ 2265], 95.00th=[ 2333], 00:25:52.361 | 99.00th=[ 2366], 99.50th=[ 2467], 99.90th=[ 2467], 99.95th=[ 2467], 00:25:52.361 | 99.99th=[ 2467] 00:25:52.361 bw ( KiB/s): min=20480, max=124928, per=1.49%, avg=62175.44, stdev=24583.71, samples=16 00:25:52.361 iops : min= 20, max= 122, avg=60.50, stdev=23.99, samples=16 00:25:52.361 lat (msec) : 250=0.16%, 500=0.81%, 750=0.98%, 1000=1.47%, 2000=44.30% 00:25:52.361 lat (msec) : >=2000=52.28% 00:25:52.361 cpu : usr=0.04%, sys=1.50%, ctx=663, majf=0, minf=32769 00:25:52.361 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.7% 00:25:52.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.361 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.361 issued rwts: total=614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.361 job3: (groupid=0, jobs=1): err= 0: pid=3321436: Wed Oct 9 02:07:10 2024 00:25:52.361 read: IOPS=42, BW=42.4MiB/s (44.4MB/s)(428MiB/10106msec) 00:25:52.361 slat (usec): min=55, max=246625, avg=23414.60, stdev=44485.68 00:25:52.361 clat (msec): min=81, max=4216, avg=2599.48, stdev=1023.26 00:25:52.361 lat (msec): min=219, max=4313, 
avg=2622.90, stdev=1024.34 00:25:52.361 clat percentiles (msec): 00:25:52.361 | 1.00th=[ 243], 5.00th=[ 426], 10.00th=[ 1116], 20.00th=[ 1838], 00:25:52.361 | 30.00th=[ 2005], 40.00th=[ 2299], 50.00th=[ 2567], 60.00th=[ 3171], 00:25:52.361 | 70.00th=[ 3406], 80.00th=[ 3608], 90.00th=[ 3775], 95.00th=[ 3842], 00:25:52.361 | 99.00th=[ 4111], 99.50th=[ 4144], 99.90th=[ 4212], 99.95th=[ 4212], 00:25:52.361 | 99.99th=[ 4212] 00:25:52.361 bw ( KiB/s): min= 8192, max=63488, per=0.92%, avg=38396.38, stdev=17738.33, samples=16 00:25:52.361 iops : min= 8, max= 62, avg=37.44, stdev=17.36, samples=16 00:25:52.361 lat (msec) : 100=0.23%, 250=2.80%, 500=4.21%, 750=0.93%, 1000=1.17% 00:25:52.361 lat (msec) : 2000=20.56%, >=2000=70.09% 00:25:52.361 cpu : usr=0.04%, sys=1.13%, ctx=694, majf=0, minf=32769 00:25:52.361 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.7%, 32=7.5%, >=64=85.3% 00:25:52.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.361 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.361 issued rwts: total=428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.361 job3: (groupid=0, jobs=1): err= 0: pid=3321437: Wed Oct 9 02:07:10 2024 00:25:52.361 read: IOPS=53, BW=53.5MiB/s (56.1MB/s)(543MiB/10150msec) 00:25:52.361 slat (usec): min=31, max=478524, avg=18452.99, stdev=47792.63 00:25:52.361 clat (msec): min=127, max=3176, avg=2031.60, stdev=672.95 00:25:52.361 lat (msec): min=323, max=3206, avg=2050.05, stdev=675.41 00:25:52.361 clat percentiles (msec): 00:25:52.361 | 1.00th=[ 326], 5.00th=[ 617], 10.00th=[ 1045], 20.00th=[ 1670], 00:25:52.361 | 30.00th=[ 1854], 40.00th=[ 1938], 50.00th=[ 1972], 60.00th=[ 2140], 00:25:52.361 | 70.00th=[ 2467], 80.00th=[ 2668], 90.00th=[ 2836], 95.00th=[ 3004], 00:25:52.361 | 99.00th=[ 3171], 99.50th=[ 3171], 99.90th=[ 3171], 99.95th=[ 3171], 00:25:52.361 | 99.99th=[ 3171] 00:25:52.361 bw ( KiB/s): min=16384, max=94208, per=1.36%, avg=56644.40, stdev=19935.25, samples=15 00:25:52.361 iops : min= 16, max= 92, avg=55.20, stdev=19.43, samples=15 00:25:52.362 lat (msec) : 250=0.18%, 500=2.95%, 750=2.76%, 1000=3.13%, 2000=44.75% 00:25:52.362 lat (msec) : >=2000=46.22% 00:25:52.362 cpu : usr=0.02%, sys=1.23%, ctx=647, majf=0, minf=32769 00:25:52.362 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.9%, >=64=88.4% 00:25:52.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.362 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.362 issued rwts: total=543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.362 job3: (groupid=0, jobs=1): err= 0: pid=3321438: Wed Oct 9 02:07:10 2024 00:25:52.362 read: IOPS=69, BW=69.2MiB/s (72.6MB/s)(701MiB/10129msec) 00:25:52.362 slat (usec): min=40, max=239534, avg=14321.09, stdev=45063.43 00:25:52.362 clat (msec): min=86, max=2862, avg=1634.69, stdev=562.45 00:25:52.362 lat (msec): min=279, max=3027, avg=1649.01, stdev=564.77 00:25:52.362 clat percentiles (msec): 00:25:52.362 | 1.00th=[ 288], 5.00th=[ 735], 10.00th=[ 1116], 20.00th=[ 1284], 00:25:52.362 | 30.00th=[ 1334], 40.00th=[ 1401], 50.00th=[ 1469], 60.00th=[ 1569], 00:25:52.362 | 70.00th=[ 2056], 80.00th=[ 2198], 90.00th=[ 2400], 95.00th=[ 2567], 00:25:52.362 | 99.00th=[ 2836], 99.50th=[ 2869], 99.90th=[ 2869], 99.95th=[ 2869], 00:25:52.362 | 99.99th=[ 2869] 00:25:52.362 bw ( KiB/s): min=28672, max=124928, 
per=1.76%, avg=73340.13, stdev=29411.92, samples=16 00:25:52.362 iops : min= 28, max= 122, avg=71.56, stdev=28.81, samples=16 00:25:52.362 lat (msec) : 100=0.14%, 500=2.28%, 750=4.28%, 1000=3.00%, 2000=56.49% 00:25:52.362 lat (msec) : >=2000=33.81% 00:25:52.362 cpu : usr=0.02%, sys=1.42%, ctx=653, majf=0, minf=32769 00:25:52.362 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.6%, >=64=91.0% 00:25:52.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.362 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.362 issued rwts: total=701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.362 job3: (groupid=0, jobs=1): err= 0: pid=3321439: Wed Oct 9 02:07:10 2024 00:25:52.362 read: IOPS=76, BW=77.0MiB/s (80.7MB/s)(782MiB/10160msec) 00:25:52.362 slat (usec): min=46, max=220157, avg=12784.38, stdev=40905.97 00:25:52.362 clat (msec): min=158, max=1975, avg=1536.26, stdev=334.12 00:25:52.362 lat (msec): min=161, max=1976, avg=1549.04, stdev=335.08 00:25:52.362 clat percentiles (msec): 00:25:52.362 | 1.00th=[ 338], 5.00th=[ 768], 10.00th=[ 1200], 20.00th=[ 1418], 00:25:52.362 | 30.00th=[ 1485], 40.00th=[ 1552], 50.00th=[ 1603], 60.00th=[ 1670], 00:25:52.362 | 70.00th=[ 1737], 80.00th=[ 1770], 90.00th=[ 1804], 95.00th=[ 1888], 00:25:52.362 | 99.00th=[ 1955], 99.50th=[ 1972], 99.90th=[ 1972], 99.95th=[ 1972], 00:25:52.362 | 99.99th=[ 1972] 00:25:52.362 bw ( KiB/s): min=61317, max=114688, per=1.89%, avg=78881.47, stdev=17986.10, samples=17 00:25:52.362 iops : min= 59, max= 112, avg=76.88, stdev=17.60, samples=17 00:25:52.362 lat (msec) : 250=0.77%, 500=1.92%, 750=1.92%, 1000=2.81%, 2000=92.58% 00:25:52.362 cpu : usr=0.03%, sys=1.87%, ctx=671, majf=0, minf=32769 00:25:52.362 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=91.9% 00:25:52.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.362 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.362 issued rwts: total=782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.362 job3: (groupid=0, jobs=1): err= 0: pid=3321440: Wed Oct 9 02:07:10 2024 00:25:52.362 read: IOPS=50, BW=50.9MiB/s (53.4MB/s)(519MiB/10191msec) 00:25:52.362 slat (usec): min=37, max=319268, avg=19263.48, stdev=55890.78 00:25:52.362 clat (msec): min=190, max=3792, avg=2241.20, stdev=851.20 00:25:52.362 lat (msec): min=190, max=3804, avg=2260.47, stdev=854.24 00:25:52.362 clat percentiles (msec): 00:25:52.362 | 1.00th=[ 197], 5.00th=[ 659], 10.00th=[ 919], 20.00th=[ 1720], 00:25:52.362 | 30.00th=[ 1905], 40.00th=[ 2022], 50.00th=[ 2140], 60.00th=[ 2433], 00:25:52.362 | 70.00th=[ 2802], 80.00th=[ 3004], 90.00th=[ 3440], 95.00th=[ 3540], 00:25:52.362 | 99.00th=[ 3775], 99.50th=[ 3775], 99.90th=[ 3809], 99.95th=[ 3809], 00:25:52.362 | 99.99th=[ 3809] 00:25:52.362 bw ( KiB/s): min=18395, max=79872, per=1.20%, avg=50172.38, stdev=19570.38, samples=16 00:25:52.362 iops : min= 17, max= 78, avg=48.88, stdev=19.14, samples=16 00:25:52.362 lat (msec) : 250=1.54%, 500=2.89%, 750=3.08%, 1000=3.08%, 2000=28.13% 00:25:52.362 lat (msec) : >=2000=61.27% 00:25:52.362 cpu : usr=0.04%, sys=1.49%, ctx=641, majf=0, minf=32769 00:25:52.362 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.2%, >=64=87.9% 00:25:52.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.362 complete : 0=0.0%, 
4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.362 issued rwts: total=519,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.362 job3: (groupid=0, jobs=1): err= 0: pid=3321441: Wed Oct 9 02:07:10 2024 00:25:52.362 read: IOPS=76, BW=76.6MiB/s (80.4MB/s)(784MiB/10230msec) 00:25:52.362 slat (usec): min=39, max=309904, avg=12830.82, stdev=34958.63 00:25:52.362 clat (msec): min=165, max=3056, avg=1538.79, stdev=563.63 00:25:52.362 lat (msec): min=250, max=3058, avg=1551.62, stdev=565.99 00:25:52.362 clat percentiles (msec): 00:25:52.362 | 1.00th=[ 259], 5.00th=[ 735], 10.00th=[ 1099], 20.00th=[ 1200], 00:25:52.362 | 30.00th=[ 1267], 40.00th=[ 1301], 50.00th=[ 1351], 60.00th=[ 1452], 00:25:52.362 | 70.00th=[ 1620], 80.00th=[ 2005], 90.00th=[ 2567], 95.00th=[ 2635], 00:25:52.362 | 99.00th=[ 2836], 99.50th=[ 2937], 99.90th=[ 3071], 99.95th=[ 3071], 00:25:52.362 | 99.99th=[ 3071] 00:25:52.362 bw ( KiB/s): min=14336, max=118784, per=1.89%, avg=79017.88, stdev=31105.73, samples=17 00:25:52.362 iops : min= 14, max= 116, avg=77.12, stdev=30.36, samples=17 00:25:52.362 lat (msec) : 250=0.13%, 500=3.70%, 750=2.04%, 1000=1.91%, 2000=72.32% 00:25:52.362 lat (msec) : >=2000=19.90% 00:25:52.362 cpu : usr=0.02%, sys=1.80%, ctx=763, majf=0, minf=32769 00:25:52.362 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=92.0% 00:25:52.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.362 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.362 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.362 job3: (groupid=0, jobs=1): err= 0: pid=3321442: Wed Oct 9 02:07:10 2024 00:25:52.362 read: IOPS=72, BW=72.8MiB/s (76.3MB/s)(739MiB/10157msec) 00:25:52.362 slat (usec): min=37, max=253531, avg=13562.46, stdev=35384.53 00:25:52.362 clat (msec): min=131, max=2889, avg=1575.82, stdev=480.74 00:25:52.362 lat (msec): min=261, max=2911, avg=1589.38, stdev=482.57 00:25:52.362 clat percentiles (msec): 00:25:52.362 | 1.00th=[ 642], 5.00th=[ 936], 10.00th=[ 1217], 20.00th=[ 1267], 00:25:52.362 | 30.00th=[ 1318], 40.00th=[ 1385], 50.00th=[ 1401], 60.00th=[ 1502], 00:25:52.362 | 70.00th=[ 1603], 80.00th=[ 2072], 90.00th=[ 2400], 95.00th=[ 2567], 00:25:52.362 | 99.00th=[ 2735], 99.50th=[ 2735], 99.90th=[ 2903], 99.95th=[ 2903], 00:25:52.362 | 99.99th=[ 2903] 00:25:52.362 bw ( KiB/s): min=24576, max=122880, per=1.87%, avg=78194.94, stdev=28159.48, samples=16 00:25:52.362 iops : min= 24, max= 120, avg=76.31, stdev=27.45, samples=16 00:25:52.362 lat (msec) : 250=0.14%, 500=0.27%, 750=2.03%, 1000=4.06%, 2000=71.58% 00:25:52.362 lat (msec) : >=2000=21.92% 00:25:52.362 cpu : usr=0.00%, sys=1.53%, ctx=803, majf=0, minf=32769 00:25:52.362 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.3%, >=64=91.5% 00:25:52.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.362 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.362 issued rwts: total=739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.362 job3: (groupid=0, jobs=1): err= 0: pid=3321443: Wed Oct 9 02:07:10 2024 00:25:52.362 read: IOPS=31, BW=31.4MiB/s (32.9MB/s)(320MiB/10189msec) 00:25:52.362 slat (usec): min=58, max=244063, avg=31366.54, stdev=49628.56 00:25:52.362 clat (msec): min=149, max=4781, 
avg=3392.78, stdev=1071.06 00:25:52.362 lat (msec): min=253, max=4798, avg=3424.15, stdev=1067.07 00:25:52.362 clat percentiles (msec): 00:25:52.362 | 1.00th=[ 259], 5.00th=[ 827], 10.00th=[ 1670], 20.00th=[ 2366], 00:25:52.362 | 30.00th=[ 3540], 40.00th=[ 3708], 50.00th=[ 3809], 60.00th=[ 3876], 00:25:52.362 | 70.00th=[ 3943], 80.00th=[ 4077], 90.00th=[ 4279], 95.00th=[ 4530], 00:25:52.362 | 99.00th=[ 4732], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:25:52.362 | 99.99th=[ 4799] 00:25:52.362 bw ( KiB/s): min=14307, max=53248, per=0.72%, avg=30239.15, stdev=12599.87, samples=13 00:25:52.362 iops : min= 13, max= 52, avg=29.38, stdev=12.36, samples=13 00:25:52.362 lat (msec) : 250=0.31%, 500=2.81%, 750=1.25%, 1000=1.25%, 2000=7.50% 00:25:52.362 lat (msec) : >=2000=86.88% 00:25:52.362 cpu : usr=0.01%, sys=1.09%, ctx=701, majf=0, minf=32769 00:25:52.362 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=5.0%, 32=10.0%, >=64=80.3% 00:25:52.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.362 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:25:52.362 issued rwts: total=320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.362 job3: (groupid=0, jobs=1): err= 0: pid=3321444: Wed Oct 9 02:07:10 2024 00:25:52.362 read: IOPS=47, BW=47.3MiB/s (49.6MB/s)(482MiB/10184msec) 00:25:52.362 slat (usec): min=46, max=219380, avg=20819.20, stdev=43834.48 00:25:52.362 clat (msec): min=145, max=4775, avg=2357.28, stdev=1193.05 00:25:52.362 lat (msec): min=295, max=4776, avg=2378.10, stdev=1196.90 00:25:52.362 clat percentiles (msec): 00:25:52.362 | 1.00th=[ 300], 5.00th=[ 535], 10.00th=[ 953], 20.00th=[ 1603], 00:25:52.362 | 30.00th=[ 1754], 40.00th=[ 1787], 50.00th=[ 1972], 60.00th=[ 2333], 00:25:52.362 | 70.00th=[ 2970], 80.00th=[ 3574], 90.00th=[ 4329], 95.00th=[ 4597], 00:25:52.362 | 99.00th=[ 4732], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:25:52.362 | 99.99th=[ 4799] 00:25:52.362 bw ( KiB/s): min=18432, max=96256, per=1.08%, avg=45288.44, stdev=23592.28, samples=16 00:25:52.362 iops : min= 18, max= 94, avg=44.06, stdev=22.91, samples=16 00:25:52.362 lat (msec) : 250=0.21%, 500=3.32%, 750=4.15%, 1000=5.39%, 2000=39.63% 00:25:52.362 lat (msec) : >=2000=47.30% 00:25:52.362 cpu : usr=0.05%, sys=1.37%, ctx=638, majf=0, minf=32769 00:25:52.362 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.3%, 32=6.6%, >=64=86.9% 00:25:52.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.362 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.362 issued rwts: total=482,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.362 job3: (groupid=0, jobs=1): err= 0: pid=3321445: Wed Oct 9 02:07:10 2024 00:25:52.362 read: IOPS=48, BW=48.8MiB/s (51.2MB/s)(499MiB/10215msec) 00:25:52.362 slat (usec): min=53, max=218153, avg=20038.02, stdev=43231.64 00:25:52.362 clat (msec): min=213, max=4750, avg=2483.99, stdev=1127.21 00:25:52.362 lat (msec): min=226, max=4753, avg=2504.03, stdev=1131.27 00:25:52.362 clat percentiles (msec): 00:25:52.362 | 1.00th=[ 368], 5.00th=[ 609], 10.00th=[ 1150], 20.00th=[ 1720], 00:25:52.362 | 30.00th=[ 1871], 40.00th=[ 1989], 50.00th=[ 2089], 60.00th=[ 2534], 00:25:52.362 | 70.00th=[ 3071], 80.00th=[ 3775], 90.00th=[ 4178], 95.00th=[ 4530], 00:25:52.362 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 4732], 99.95th=[ 4732], 00:25:52.362 | 
99.99th=[ 4732] 00:25:52.362 bw ( KiB/s): min=20439, max=77824, per=1.01%, avg=42325.06, stdev=19879.74, samples=18 00:25:52.362 iops : min= 19, max= 76, avg=41.22, stdev=19.44, samples=18 00:25:52.362 lat (msec) : 250=0.40%, 500=3.21%, 750=3.21%, 1000=2.00%, 2000=36.07% 00:25:52.362 lat (msec) : >=2000=55.11% 00:25:52.362 cpu : usr=0.01%, sys=1.59%, ctx=684, majf=0, minf=32769 00:25:52.362 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.4% 00:25:52.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.362 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.362 issued rwts: total=499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.362 job3: (groupid=0, jobs=1): err= 0: pid=3321446: Wed Oct 9 02:07:10 2024 00:25:52.362 read: IOPS=37, BW=37.1MiB/s (38.9MB/s)(377MiB/10168msec) 00:25:52.362 slat (usec): min=35, max=229069, avg=26578.41, stdev=50740.68 00:25:52.362 clat (msec): min=145, max=5056, avg=2995.50, stdev=1015.93 00:25:52.362 lat (msec): min=252, max=5059, avg=3022.08, stdev=1011.76 00:25:52.362 clat percentiles (msec): 00:25:52.362 | 1.00th=[ 259], 5.00th=[ 944], 10.00th=[ 1804], 20.00th=[ 2400], 00:25:52.362 | 30.00th=[ 2635], 40.00th=[ 2702], 50.00th=[ 2836], 60.00th=[ 3037], 00:25:52.362 | 70.00th=[ 3373], 80.00th=[ 3809], 90.00th=[ 4597], 95.00th=[ 4732], 00:25:52.362 | 99.00th=[ 4933], 99.50th=[ 5067], 99.90th=[ 5067], 99.95th=[ 5067], 00:25:52.362 | 99.99th=[ 5067] 00:25:52.362 bw ( KiB/s): min=10240, max=69632, per=0.81%, avg=33989.33, stdev=17667.98, samples=15 00:25:52.362 iops : min= 10, max= 68, avg=33.07, stdev=17.31, samples=15 00:25:52.362 lat (msec) : 250=0.27%, 500=2.12%, 750=1.59%, 1000=1.06%, 2000=5.57% 00:25:52.362 lat (msec) : >=2000=89.39% 00:25:52.362 cpu : usr=0.01%, sys=1.08%, ctx=638, majf=0, minf=32769 00:25:52.362 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.2%, 32=8.5%, >=64=83.3% 00:25:52.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.362 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:52.362 issued rwts: total=377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.362 job3: (groupid=0, jobs=1): err= 0: pid=3321447: Wed Oct 9 02:07:10 2024 00:25:52.362 read: IOPS=56, BW=56.5MiB/s (59.2MB/s)(575MiB/10179msec) 00:25:52.362 slat (usec): min=47, max=234566, avg=17390.00, stdev=39704.03 00:25:52.362 clat (msec): min=176, max=2691, avg=1966.41, stdev=580.32 00:25:52.362 lat (msec): min=178, max=2698, avg=1983.80, stdev=580.76 00:25:52.362 clat percentiles (msec): 00:25:52.362 | 1.00th=[ 257], 5.00th=[ 667], 10.00th=[ 953], 20.00th=[ 1653], 00:25:52.362 | 30.00th=[ 1871], 40.00th=[ 2005], 50.00th=[ 2165], 60.00th=[ 2265], 00:25:52.362 | 70.00th=[ 2366], 80.00th=[ 2433], 90.00th=[ 2500], 95.00th=[ 2567], 00:25:52.362 | 99.00th=[ 2635], 99.50th=[ 2635], 99.90th=[ 2702], 99.95th=[ 2702], 00:25:52.362 | 99.99th=[ 2702] 00:25:52.362 bw ( KiB/s): min=28672, max=96256, per=1.46%, avg=61158.13, stdev=22760.12, samples=15 00:25:52.362 iops : min= 28, max= 94, avg=59.60, stdev=22.22, samples=15 00:25:52.362 lat (msec) : 250=0.52%, 500=1.39%, 750=5.04%, 1000=5.91%, 2000=26.43% 00:25:52.362 lat (msec) : >=2000=60.70% 00:25:52.362 cpu : usr=0.02%, sys=1.28%, ctx=695, majf=0, minf=32769 00:25:52.362 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=89.0% 00:25:52.362 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.362 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.362 issued rwts: total=575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.362 job3: (groupid=0, jobs=1): err= 0: pid=3321448: Wed Oct 9 02:07:10 2024 00:25:52.362 read: IOPS=35, BW=35.9MiB/s (37.7MB/s)(367MiB/10211msec) 00:25:52.362 slat (usec): min=57, max=214600, avg=27408.50, stdev=51104.59 00:25:52.362 clat (msec): min=150, max=4637, avg=3029.23, stdev=985.40 00:25:52.362 lat (msec): min=251, max=4647, avg=3056.64, stdev=978.73 00:25:52.362 clat percentiles (msec): 00:25:52.362 | 1.00th=[ 259], 5.00th=[ 1167], 10.00th=[ 1804], 20.00th=[ 2198], 00:25:52.362 | 30.00th=[ 2534], 40.00th=[ 2836], 50.00th=[ 3272], 60.00th=[ 3540], 00:25:52.362 | 70.00th=[ 3641], 80.00th=[ 3809], 90.00th=[ 4144], 95.00th=[ 4396], 00:25:52.362 | 99.00th=[ 4597], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 4665], 00:25:52.362 | 99.99th=[ 4665] 00:25:52.362 bw ( KiB/s): min= 4096, max=83968, per=0.84%, avg=34950.00, stdev=23291.80, samples=14 00:25:52.362 iops : min= 4, max= 82, avg=34.00, stdev=22.71, samples=14 00:25:52.362 lat (msec) : 250=0.27%, 500=2.18%, 750=1.36%, 1000=0.82%, 2000=9.81% 00:25:52.362 lat (msec) : >=2000=85.56% 00:25:52.362 cpu : usr=0.01%, sys=1.32%, ctx=644, majf=0, minf=32769 00:25:52.362 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.7%, >=64=82.8% 00:25:52.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.362 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:52.362 issued rwts: total=367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.362 job4: (groupid=0, jobs=1): err= 0: pid=3321449: Wed Oct 9 02:07:10 2024 00:25:52.362 read: IOPS=39, BW=39.6MiB/s (41.5MB/s)(404MiB/10206msec) 00:25:52.362 slat (usec): min=39, max=228893, avg=24819.53, stdev=45716.31 00:25:52.362 clat (msec): min=176, max=4571, avg=2852.00, stdev=1159.71 00:25:52.362 lat (msec): min=301, max=4576, avg=2876.81, stdev=1161.14 00:25:52.362 clat percentiles (msec): 00:25:52.362 | 1.00th=[ 409], 5.00th=[ 902], 10.00th=[ 1586], 20.00th=[ 1787], 00:25:52.362 | 30.00th=[ 1938], 40.00th=[ 2198], 50.00th=[ 2735], 60.00th=[ 3373], 00:25:52.362 | 70.00th=[ 3876], 80.00th=[ 4212], 90.00th=[ 4329], 95.00th=[ 4396], 00:25:52.362 | 99.00th=[ 4463], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597], 00:25:52.362 | 99.99th=[ 4597] 00:25:52.362 bw ( KiB/s): min=20439, max=77668, per=0.85%, avg=35303.56, stdev=15402.57, samples=16 00:25:52.362 iops : min= 19, max= 75, avg=34.19, stdev=14.99, samples=16 00:25:52.362 lat (msec) : 250=0.25%, 500=1.49%, 750=1.73%, 1000=1.73%, 2000=26.73% 00:25:52.362 lat (msec) : >=2000=68.07% 00:25:52.362 cpu : usr=0.01%, sys=1.46%, ctx=655, majf=0, minf=32769 00:25:52.362 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=7.9%, >=64=84.4% 00:25:52.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.362 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:52.362 issued rwts: total=404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.362 job4: (groupid=0, jobs=1): err= 0: pid=3321450: Wed Oct 9 02:07:10 2024 00:25:52.362 read: IOPS=44, BW=44.7MiB/s (46.9MB/s)(454MiB/10154msec) 00:25:52.362 slat (usec): 
min=44, max=235780, avg=22063.55, stdev=48374.00 00:25:52.362 clat (msec): min=134, max=3821, avg=2614.42, stdev=801.98 00:25:52.362 lat (msec): min=242, max=3878, avg=2636.49, stdev=802.78 00:25:52.362 clat percentiles (msec): 00:25:52.362 | 1.00th=[ 558], 5.00th=[ 793], 10.00th=[ 1469], 20.00th=[ 1989], 00:25:52.362 | 30.00th=[ 2400], 40.00th=[ 2601], 50.00th=[ 2769], 60.00th=[ 3004], 00:25:52.362 | 70.00th=[ 3104], 80.00th=[ 3205], 90.00th=[ 3540], 95.00th=[ 3675], 00:25:52.362 | 99.00th=[ 3742], 99.50th=[ 3775], 99.90th=[ 3809], 99.95th=[ 3809], 00:25:52.362 | 99.99th=[ 3809] 00:25:52.362 bw ( KiB/s): min=22528, max=79872, per=1.00%, avg=41719.81, stdev=17492.26, samples=16 00:25:52.362 iops : min= 22, max= 78, avg=40.69, stdev=17.00, samples=16 00:25:52.362 lat (msec) : 250=0.44%, 500=0.44%, 750=3.30%, 1000=1.76%, 2000=14.32% 00:25:52.362 lat (msec) : >=2000=79.74% 00:25:52.362 cpu : usr=0.01%, sys=1.30%, ctx=616, majf=0, minf=32769 00:25:52.362 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 32=7.0%, >=64=86.1% 00:25:52.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.362 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.362 issued rwts: total=454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.362 job4: (groupid=0, jobs=1): err= 0: pid=3321451: Wed Oct 9 02:07:10 2024 00:25:52.362 read: IOPS=94, BW=94.4MiB/s (99.0MB/s)(957MiB/10139msec) 00:25:52.362 slat (usec): min=46, max=189845, avg=10558.57, stdev=34088.45 00:25:52.362 clat (msec): min=30, max=1696, avg=1260.99, stdev=257.00 00:25:52.362 lat (msec): min=172, max=1734, avg=1271.55, stdev=257.44 00:25:52.362 clat percentiles (msec): 00:25:52.362 | 1.00th=[ 188], 5.00th=[ 684], 10.00th=[ 1099], 20.00th=[ 1150], 00:25:52.362 | 30.00th=[ 1234], 40.00th=[ 1250], 50.00th=[ 1301], 60.00th=[ 1334], 00:25:52.362 | 70.00th=[ 1385], 80.00th=[ 1452], 90.00th=[ 1485], 95.00th=[ 1552], 00:25:52.362 | 99.00th=[ 1636], 99.50th=[ 1636], 99.90th=[ 1703], 99.95th=[ 1703], 00:25:52.362 | 99.99th=[ 1703] 00:25:52.362 bw ( KiB/s): min=26624, max=124928, per=2.26%, avg=94318.33, stdev=24187.98, samples=18 00:25:52.362 iops : min= 26, max= 122, avg=92.06, stdev=23.57, samples=18 00:25:52.362 lat (msec) : 50=0.10%, 250=1.67%, 500=1.67%, 750=1.67%, 1000=3.34% 00:25:52.362 lat (msec) : 2000=91.54% 00:25:52.362 cpu : usr=0.05%, sys=1.84%, ctx=920, majf=0, minf=32769 00:25:52.362 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.4% 00:25:52.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.362 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:52.362 issued rwts: total=957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.362 job4: (groupid=0, jobs=1): err= 0: pid=3321452: Wed Oct 9 02:07:10 2024 00:25:52.362 read: IOPS=31, BW=31.5MiB/s (33.0MB/s)(323MiB/10252msec) 00:25:52.362 slat (usec): min=56, max=248085, avg=31131.18, stdev=51983.52 00:25:52.362 clat (msec): min=194, max=5772, avg=3672.97, stdev=1503.77 00:25:52.362 lat (msec): min=308, max=5780, avg=3704.10, stdev=1505.01 00:25:52.362 clat percentiles (msec): 00:25:52.362 | 1.00th=[ 330], 5.00th=[ 869], 10.00th=[ 1552], 20.00th=[ 2467], 00:25:52.362 | 30.00th=[ 2836], 40.00th=[ 3037], 50.00th=[ 3608], 60.00th=[ 4396], 00:25:52.362 | 70.00th=[ 5067], 80.00th=[ 5269], 90.00th=[ 5537], 95.00th=[ 5537], 00:25:52.363 
| 99.00th=[ 5671], 99.50th=[ 5738], 99.90th=[ 5805], 99.95th=[ 5805], 00:25:52.363 | 99.99th=[ 5805] 00:25:52.363 bw ( KiB/s): min=14336, max=40960, per=0.60%, avg=24962.25, stdev=8299.26, samples=16 00:25:52.363 iops : min= 14, max= 40, avg=24.38, stdev= 8.11, samples=16 00:25:52.363 lat (msec) : 250=0.31%, 500=1.86%, 750=1.55%, 1000=2.48%, 2000=7.74% 00:25:52.363 lat (msec) : >=2000=86.07% 00:25:52.363 cpu : usr=0.03%, sys=1.36%, ctx=670, majf=0, minf=32769 00:25:52.363 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=5.0%, 32=9.9%, >=64=80.5% 00:25:52.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.363 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:25:52.363 issued rwts: total=323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.363 job4: (groupid=0, jobs=1): err= 0: pid=3321453: Wed Oct 9 02:07:10 2024 00:25:52.363 read: IOPS=64, BW=64.6MiB/s (67.7MB/s)(664MiB/10278msec) 00:25:52.363 slat (usec): min=43, max=257757, avg=15203.63, stdev=38244.88 00:25:52.363 clat (msec): min=179, max=2376, avg=1854.44, stdev=404.38 00:25:52.363 lat (msec): min=402, max=2377, avg=1869.65, stdev=404.11 00:25:52.363 clat percentiles (msec): 00:25:52.363 | 1.00th=[ 405], 5.00th=[ 835], 10.00th=[ 1301], 20.00th=[ 1720], 00:25:52.363 | 30.00th=[ 1838], 40.00th=[ 1888], 50.00th=[ 1938], 60.00th=[ 2005], 00:25:52.363 | 70.00th=[ 2056], 80.00th=[ 2140], 90.00th=[ 2198], 95.00th=[ 2232], 00:25:52.363 | 99.00th=[ 2299], 99.50th=[ 2299], 99.90th=[ 2366], 99.95th=[ 2366], 00:25:52.363 | 99.99th=[ 2366] 00:25:52.363 bw ( KiB/s): min= 6144, max=86016, per=1.46%, avg=60977.83, stdev=16025.93, samples=18 00:25:52.363 iops : min= 6, max= 84, avg=59.50, stdev=15.64, samples=18 00:25:52.363 lat (msec) : 250=0.15%, 500=2.26%, 750=2.26%, 1000=1.36%, 2000=53.31% 00:25:52.363 lat (msec) : >=2000=40.66% 00:25:52.363 cpu : usr=0.02%, sys=1.86%, ctx=617, majf=0, minf=32769 00:25:52.363 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:25:52.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.363 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.363 issued rwts: total=664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.363 job4: (groupid=0, jobs=1): err= 0: pid=3321454: Wed Oct 9 02:07:10 2024 00:25:52.363 read: IOPS=70, BW=70.9MiB/s (74.4MB/s)(722MiB/10179msec) 00:25:52.363 slat (usec): min=37, max=237990, avg=13941.08, stdev=44601.22 00:25:52.363 clat (msec): min=109, max=2066, avg=1646.41, stdev=355.10 00:25:52.363 lat (msec): min=317, max=2069, avg=1660.35, stdev=355.65 00:25:52.363 clat percentiles (msec): 00:25:52.363 | 1.00th=[ 321], 5.00th=[ 793], 10.00th=[ 1045], 20.00th=[ 1586], 00:25:52.363 | 30.00th=[ 1653], 40.00th=[ 1687], 50.00th=[ 1754], 60.00th=[ 1787], 00:25:52.363 | 70.00th=[ 1804], 80.00th=[ 1854], 90.00th=[ 1921], 95.00th=[ 1972], 00:25:52.363 | 99.00th=[ 2056], 99.50th=[ 2056], 99.90th=[ 2072], 99.95th=[ 2072], 00:25:52.363 | 99.99th=[ 2072] 00:25:52.363 bw ( KiB/s): min=24576, max=104239, per=1.71%, avg=71529.82, stdev=17114.70, samples=17 00:25:52.363 iops : min= 24, max= 101, avg=69.71, stdev=16.61, samples=17 00:25:52.363 lat (msec) : 250=0.14%, 500=2.08%, 750=2.22%, 1000=2.22%, 2000=90.17% 00:25:52.363 lat (msec) : >=2000=3.19% 00:25:52.363 cpu : usr=0.06%, sys=1.97%, ctx=659, majf=0, minf=32769 00:25:52.363 IO 
depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3% 00:25:52.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.363 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.363 issued rwts: total=722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.363 job4: (groupid=0, jobs=1): err= 0: pid=3321455: Wed Oct 9 02:07:10 2024 00:25:52.363 read: IOPS=57, BW=57.2MiB/s (59.9MB/s)(584MiB/10218msec) 00:25:52.363 slat (usec): min=32, max=233470, avg=17272.00, stdev=43800.68 00:25:52.363 clat (msec): min=128, max=3782, avg=2066.08, stdev=817.18 00:25:52.363 lat (msec): min=288, max=3784, avg=2083.35, stdev=819.84 00:25:52.363 clat percentiles (msec): 00:25:52.363 | 1.00th=[ 296], 5.00th=[ 481], 10.00th=[ 978], 20.00th=[ 1469], 00:25:52.363 | 30.00th=[ 1620], 40.00th=[ 1854], 50.00th=[ 2056], 60.00th=[ 2140], 00:25:52.363 | 70.00th=[ 2601], 80.00th=[ 2802], 90.00th=[ 3205], 95.00th=[ 3406], 00:25:52.363 | 99.00th=[ 3742], 99.50th=[ 3775], 99.90th=[ 3775], 99.95th=[ 3775], 00:25:52.363 | 99.99th=[ 3775] 00:25:52.363 bw ( KiB/s): min=22528, max=104448, per=1.32%, avg=54935.24, stdev=29289.65, samples=17 00:25:52.363 iops : min= 22, max= 102, avg=53.53, stdev=28.70, samples=17 00:25:52.363 lat (msec) : 250=0.17%, 500=5.31%, 750=2.91%, 1000=2.23%, 2000=36.99% 00:25:52.363 lat (msec) : >=2000=52.40% 00:25:52.363 cpu : usr=0.00%, sys=1.58%, ctx=680, majf=0, minf=32769 00:25:52.363 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2% 00:25:52.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.363 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.363 issued rwts: total=584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.363 job4: (groupid=0, jobs=1): err= 0: pid=3321456: Wed Oct 9 02:07:10 2024 00:25:52.363 read: IOPS=52, BW=52.9MiB/s (55.5MB/s)(542MiB/10247msec) 00:25:52.363 slat (usec): min=39, max=228270, avg=18623.89, stdev=40207.70 00:25:52.363 clat (msec): min=149, max=3559, avg=2100.12, stdev=741.34 00:25:52.363 lat (msec): min=378, max=3560, avg=2118.75, stdev=741.77 00:25:52.363 clat percentiles (msec): 00:25:52.363 | 1.00th=[ 393], 5.00th=[ 936], 10.00th=[ 1351], 20.00th=[ 1636], 00:25:52.363 | 30.00th=[ 1737], 40.00th=[ 1770], 50.00th=[ 1955], 60.00th=[ 2039], 00:25:52.363 | 70.00th=[ 2232], 80.00th=[ 3004], 90.00th=[ 3339], 95.00th=[ 3440], 00:25:52.363 | 99.00th=[ 3507], 99.50th=[ 3540], 99.90th=[ 3574], 99.95th=[ 3574], 00:25:52.363 | 99.99th=[ 3574] 00:25:52.363 bw ( KiB/s): min=18432, max=94208, per=1.27%, avg=52972.25, stdev=23850.12, samples=16 00:25:52.363 iops : min= 18, max= 92, avg=51.56, stdev=23.26, samples=16 00:25:52.363 lat (msec) : 250=0.18%, 500=0.92%, 750=2.40%, 1000=2.77%, 2000=49.45% 00:25:52.363 lat (msec) : >=2000=44.28% 00:25:52.363 cpu : usr=0.01%, sys=1.32%, ctx=653, majf=0, minf=32769 00:25:52.363 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=5.9%, >=64=88.4% 00:25:52.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.363 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.363 issued rwts: total=542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.363 job4: (groupid=0, jobs=1): err= 0: pid=3321457: Wed Oct 9 02:07:10 2024 00:25:52.363 
read: IOPS=38, BW=39.0MiB/s (40.8MB/s)(399MiB/10242msec) 00:25:52.363 slat (usec): min=60, max=246649, avg=25136.70, stdev=51066.70 00:25:52.363 clat (msec): min=210, max=5028, avg=2920.88, stdev=1388.95 00:25:52.363 lat (msec): min=300, max=5037, avg=2946.02, stdev=1394.08 00:25:52.363 clat percentiles (msec): 00:25:52.363 | 1.00th=[ 317], 5.00th=[ 575], 10.00th=[ 1036], 20.00th=[ 1519], 00:25:52.363 | 30.00th=[ 1955], 40.00th=[ 2265], 50.00th=[ 2903], 60.00th=[ 3540], 00:25:52.363 | 70.00th=[ 4010], 80.00th=[ 4530], 90.00th=[ 4665], 95.00th=[ 4866], 00:25:52.363 | 99.00th=[ 4933], 99.50th=[ 4933], 99.90th=[ 5000], 99.95th=[ 5000], 00:25:52.363 | 99.99th=[ 5000] 00:25:52.363 bw ( KiB/s): min=16384, max=83800, per=0.89%, avg=36985.47, stdev=20823.87, samples=15 00:25:52.363 iops : min= 16, max= 81, avg=36.00, stdev=20.23, samples=15 00:25:52.363 lat (msec) : 250=0.25%, 500=2.76%, 750=2.26%, 1000=4.01%, 2000=21.80% 00:25:52.363 lat (msec) : >=2000=68.92% 00:25:52.363 cpu : usr=0.02%, sys=1.34%, ctx=671, majf=0, minf=32769 00:25:52.363 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.0%, >=64=84.2% 00:25:52.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.363 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:52.363 issued rwts: total=399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.363 job4: (groupid=0, jobs=1): err= 0: pid=3321458: Wed Oct 9 02:07:10 2024 00:25:52.363 read: IOPS=29, BW=29.7MiB/s (31.2MB/s)(302MiB/10160msec) 00:25:52.363 slat (usec): min=105, max=258124, avg=33325.33, stdev=61170.36 00:25:52.363 clat (msec): min=94, max=4876, avg=3247.23, stdev=1405.38 00:25:52.363 lat (msec): min=170, max=4879, avg=3280.56, stdev=1407.55 00:25:52.363 clat percentiles (msec): 00:25:52.363 | 1.00th=[ 174], 5.00th=[ 418], 10.00th=[ 659], 20.00th=[ 1603], 00:25:52.363 | 30.00th=[ 2903], 40.00th=[ 3809], 50.00th=[ 3943], 60.00th=[ 4010], 00:25:52.363 | 70.00th=[ 4144], 80.00th=[ 4329], 90.00th=[ 4463], 95.00th=[ 4597], 00:25:52.363 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:25:52.363 | 99.99th=[ 4866] 00:25:52.363 bw ( KiB/s): min=16384, max=51200, per=0.78%, avg=32395.64, stdev=10673.92, samples=11 00:25:52.363 iops : min= 16, max= 50, avg=31.64, stdev=10.42, samples=11 00:25:52.363 lat (msec) : 100=0.33%, 250=4.30%, 500=2.98%, 750=2.98%, 1000=2.65% 00:25:52.363 lat (msec) : 2000=8.94%, >=2000=77.81% 00:25:52.363 cpu : usr=0.01%, sys=1.00%, ctx=688, majf=0, minf=32769 00:25:52.363 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.3%, 32=10.6%, >=64=79.1% 00:25:52.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.363 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:25:52.363 issued rwts: total=302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.363 job4: (groupid=0, jobs=1): err= 0: pid=3321459: Wed Oct 9 02:07:10 2024 00:25:52.363 read: IOPS=40, BW=40.8MiB/s (42.8MB/s)(417MiB/10212msec) 00:25:52.363 slat (usec): min=36, max=413962, avg=24135.33, stdev=56600.34 00:25:52.363 clat (msec): min=145, max=4158, avg=2560.78, stdev=1051.00 00:25:52.363 lat (msec): min=330, max=4161, avg=2584.91, stdev=1051.10 00:25:52.363 clat percentiles (msec): 00:25:52.363 | 1.00th=[ 330], 5.00th=[ 550], 10.00th=[ 1250], 20.00th=[ 1620], 00:25:52.363 | 30.00th=[ 1838], 40.00th=[ 2123], 50.00th=[ 2668], 60.00th=[ 
3171], 00:25:52.363 | 70.00th=[ 3440], 80.00th=[ 3641], 90.00th=[ 3809], 95.00th=[ 3910], 00:25:52.363 | 99.00th=[ 3977], 99.50th=[ 4044], 99.90th=[ 4144], 99.95th=[ 4144], 00:25:52.363 | 99.99th=[ 4144] 00:25:52.363 bw ( KiB/s): min= 2048, max=92160, per=1.09%, avg=45501.77, stdev=29166.50, samples=13 00:25:52.363 iops : min= 2, max= 90, avg=44.23, stdev=28.39, samples=13 00:25:52.363 lat (msec) : 250=0.24%, 500=4.56%, 750=1.20%, 1000=1.20%, 2000=30.46% 00:25:52.363 lat (msec) : >=2000=62.35% 00:25:52.363 cpu : usr=0.03%, sys=1.33%, ctx=635, majf=0, minf=32769 00:25:52.363 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.7%, >=64=84.9% 00:25:52.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.363 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.363 issued rwts: total=417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.363 job4: (groupid=0, jobs=1): err= 0: pid=3321460: Wed Oct 9 02:07:10 2024 00:25:52.363 read: IOPS=81, BW=81.1MiB/s (85.1MB/s)(824MiB/10159msec) 00:25:52.363 slat (usec): min=52, max=166215, avg=12183.48, stdev=21611.25 00:25:52.363 clat (msec): min=112, max=1911, avg=1450.69, stdev=351.14 00:25:52.363 lat (msec): min=278, max=1929, avg=1462.87, stdev=351.89 00:25:52.363 clat percentiles (msec): 00:25:52.363 | 1.00th=[ 279], 5.00th=[ 625], 10.00th=[ 1150], 20.00th=[ 1234], 00:25:52.363 | 30.00th=[ 1301], 40.00th=[ 1401], 50.00th=[ 1536], 60.00th=[ 1636], 00:25:52.363 | 70.00th=[ 1703], 80.00th=[ 1754], 90.00th=[ 1787], 95.00th=[ 1821], 00:25:52.363 | 99.00th=[ 1871], 99.50th=[ 1888], 99.90th=[ 1905], 99.95th=[ 1905], 00:25:52.363 | 99.99th=[ 1905] 00:25:52.363 bw ( KiB/s): min=59392, max=114688, per=2.01%, avg=83821.06, stdev=16860.40, samples=17 00:25:52.363 iops : min= 58, max= 112, avg=81.71, stdev=16.56, samples=17 00:25:52.363 lat (msec) : 250=0.12%, 500=3.76%, 750=1.82%, 1000=2.18%, 2000=92.11% 00:25:52.363 cpu : usr=0.06%, sys=2.12%, ctx=709, majf=0, minf=32769 00:25:52.363 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.4% 00:25:52.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.363 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:52.363 issued rwts: total=824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.363 job4: (groupid=0, jobs=1): err= 0: pid=3321461: Wed Oct 9 02:07:10 2024 00:25:52.363 read: IOPS=32, BW=32.3MiB/s (33.9MB/s)(332MiB/10277msec) 00:25:52.363 slat (usec): min=35, max=250116, avg=30360.51, stdev=62142.41 00:25:52.363 clat (msec): min=195, max=6029, avg=3421.70, stdev=1751.50 00:25:52.363 lat (msec): min=322, max=6045, avg=3452.06, stdev=1756.67 00:25:52.363 clat percentiles (msec): 00:25:52.363 | 1.00th=[ 330], 5.00th=[ 978], 10.00th=[ 1183], 20.00th=[ 1737], 00:25:52.363 | 30.00th=[ 2106], 40.00th=[ 2567], 50.00th=[ 3071], 60.00th=[ 4212], 00:25:52.363 | 70.00th=[ 4866], 80.00th=[ 5403], 90.00th=[ 5805], 95.00th=[ 5873], 00:25:52.363 | 99.00th=[ 6007], 99.50th=[ 6007], 99.90th=[ 6007], 99.95th=[ 6007], 00:25:52.363 | 99.99th=[ 6007] 00:25:52.363 bw ( KiB/s): min=10240, max=96256, per=0.77%, avg=32134.00, stdev=24795.11, samples=13 00:25:52.363 iops : min= 10, max= 94, avg=31.31, stdev=24.24, samples=13 00:25:52.363 lat (msec) : 250=0.30%, 500=1.51%, 750=1.51%, 1000=2.11%, 2000=23.80% 00:25:52.363 lat (msec) : >=2000=70.78% 00:25:52.363 cpu : 
usr=0.03%, sys=1.24%, ctx=691, majf=0, minf=32769 00:25:52.363 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.6%, >=64=81.0% 00:25:52.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.363 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:25:52.363 issued rwts: total=332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.363 job5: (groupid=0, jobs=1): err= 0: pid=3321462: Wed Oct 9 02:07:10 2024 00:25:52.363 read: IOPS=53, BW=53.5MiB/s (56.0MB/s)(546MiB/10215msec) 00:25:52.363 slat (usec): min=37, max=210679, avg=18319.09, stdev=41584.72 00:25:52.363 clat (msec): min=209, max=4631, avg=2190.11, stdev=1000.40 00:25:52.363 lat (msec): min=373, max=4775, avg=2208.43, stdev=1003.55 00:25:52.363 clat percentiles (msec): 00:25:52.363 | 1.00th=[ 435], 5.00th=[ 927], 10.00th=[ 1284], 20.00th=[ 1385], 00:25:52.363 | 30.00th=[ 1519], 40.00th=[ 1670], 50.00th=[ 1754], 60.00th=[ 1888], 00:25:52.363 | 70.00th=[ 2769], 80.00th=[ 3205], 90.00th=[ 3742], 95.00th=[ 4178], 00:25:52.363 | 99.00th=[ 4530], 99.50th=[ 4597], 99.90th=[ 4665], 99.95th=[ 4665], 00:25:52.363 | 99.99th=[ 4665] 00:25:52.363 bw ( KiB/s): min= 8192, max=96256, per=1.21%, avg=50445.53, stdev=29747.42, samples=17 00:25:52.363 iops : min= 8, max= 94, avg=48.94, stdev=29.13, samples=17 00:25:52.363 lat (msec) : 250=0.18%, 500=1.65%, 750=1.83%, 1000=1.47%, 2000=55.68% 00:25:52.363 lat (msec) : >=2000=39.19% 00:25:52.363 cpu : usr=0.02%, sys=1.45%, ctx=682, majf=0, minf=32769 00:25:52.363 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.9%, >=64=88.5% 00:25:52.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.363 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.363 issued rwts: total=546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.363 job5: (groupid=0, jobs=1): err= 0: pid=3321463: Wed Oct 9 02:07:10 2024 00:25:52.363 read: IOPS=41, BW=41.8MiB/s (43.8MB/s)(426MiB/10195msec) 00:25:52.363 slat (usec): min=45, max=240664, avg=23470.10, stdev=51401.40 00:25:52.363 clat (msec): min=194, max=3792, avg=2660.82, stdev=742.26 00:25:52.363 lat (msec): min=362, max=3892, avg=2684.29, stdev=739.92 00:25:52.363 clat percentiles (msec): 00:25:52.363 | 1.00th=[ 414], 5.00th=[ 894], 10.00th=[ 1301], 20.00th=[ 2534], 00:25:52.363 | 30.00th=[ 2635], 40.00th=[ 2702], 50.00th=[ 2802], 60.00th=[ 2836], 00:25:52.363 | 70.00th=[ 3037], 80.00th=[ 3205], 90.00th=[ 3406], 95.00th=[ 3540], 00:25:52.363 | 99.00th=[ 3708], 99.50th=[ 3742], 99.90th=[ 3809], 99.95th=[ 3809], 00:25:52.363 | 99.99th=[ 3809] 00:25:52.363 bw ( KiB/s): min=22528, max=83968, per=0.98%, avg=40824.13, stdev=13598.38, samples=15 00:25:52.363 iops : min= 22, max= 82, avg=39.67, stdev=13.35, samples=15 00:25:52.363 lat (msec) : 250=0.23%, 500=1.64%, 750=1.88%, 1000=2.35%, 2000=8.92% 00:25:52.363 lat (msec) : >=2000=84.98% 00:25:52.363 cpu : usr=0.01%, sys=1.23%, ctx=692, majf=0, minf=32769 00:25:52.363 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.8%, 32=7.5%, >=64=85.2% 00:25:52.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.363 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:52.363 issued rwts: total=426,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.363 job5: (groupid=0, 
jobs=1): err= 0: pid=3321464: Wed Oct 9 02:07:10 2024 00:25:52.363 read: IOPS=37, BW=37.3MiB/s (39.1MB/s)(379MiB/10152msec) 00:25:52.363 slat (usec): min=33, max=280552, avg=26537.03, stdev=48348.72 00:25:52.363 clat (msec): min=92, max=4368, avg=2682.25, stdev=1072.39 00:25:52.363 lat (msec): min=170, max=4416, avg=2708.79, stdev=1075.78 00:25:52.363 clat percentiles (msec): 00:25:52.363 | 1.00th=[ 176], 5.00th=[ 514], 10.00th=[ 827], 20.00th=[ 1485], 00:25:52.363 | 30.00th=[ 2500], 40.00th=[ 2970], 50.00th=[ 3037], 60.00th=[ 3138], 00:25:52.363 | 70.00th=[ 3306], 80.00th=[ 3540], 90.00th=[ 3775], 95.00th=[ 3910], 00:25:52.363 | 99.00th=[ 4245], 99.50th=[ 4245], 99.90th=[ 4396], 99.95th=[ 4396], 00:25:52.363 | 99.99th=[ 4396] 00:25:52.363 bw ( KiB/s): min=14336, max=71680, per=1.03%, avg=42837.33, stdev=17299.97, samples=12 00:25:52.363 iops : min= 14, max= 70, avg=41.83, stdev=16.89, samples=12 00:25:52.363 lat (msec) : 100=0.26%, 250=1.85%, 500=1.32%, 750=6.07%, 1000=4.49% 00:25:52.363 lat (msec) : 2000=8.44%, >=2000=77.57% 00:25:52.363 cpu : usr=0.02%, sys=1.00%, ctx=654, majf=0, minf=32769 00:25:52.363 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.4% 00:25:52.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.363 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:52.363 issued rwts: total=379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.363 job5: (groupid=0, jobs=1): err= 0: pid=3321465: Wed Oct 9 02:07:10 2024 00:25:52.363 read: IOPS=71, BW=71.9MiB/s (75.4MB/s)(729MiB/10133msec) 00:25:52.364 slat (usec): min=46, max=239855, avg=13802.05, stdev=47562.54 00:25:52.364 clat (msec): min=68, max=2326, avg=1655.76, stdev=399.82 00:25:52.364 lat (msec): min=236, max=2327, avg=1669.56, stdev=400.65 00:25:52.364 clat percentiles (msec): 00:25:52.364 | 1.00th=[ 247], 5.00th=[ 667], 10.00th=[ 1116], 20.00th=[ 1569], 00:25:52.364 | 30.00th=[ 1620], 40.00th=[ 1687], 50.00th=[ 1737], 60.00th=[ 1770], 00:25:52.364 | 70.00th=[ 1871], 80.00th=[ 1938], 90.00th=[ 2022], 95.00th=[ 2106], 00:25:52.364 | 99.00th=[ 2265], 99.50th=[ 2299], 99.90th=[ 2333], 99.95th=[ 2333], 00:25:52.364 | 99.99th=[ 2333] 00:25:52.364 bw ( KiB/s): min= 6144, max=112640, per=1.64%, avg=68387.28, stdev=22367.94, samples=18 00:25:52.364 iops : min= 6, max= 110, avg=66.78, stdev=21.85, samples=18 00:25:52.364 lat (msec) : 100=0.14%, 250=1.37%, 500=2.74%, 750=2.19%, 1000=2.06% 00:25:52.364 lat (msec) : 2000=79.56%, >=2000=11.93% 00:25:52.364 cpu : usr=0.00%, sys=1.78%, ctx=673, majf=0, minf=32769 00:25:52.364 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.4% 00:25:52.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.364 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.364 issued rwts: total=729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.364 job5: (groupid=0, jobs=1): err= 0: pid=3321466: Wed Oct 9 02:07:10 2024 00:25:52.364 read: IOPS=52, BW=52.2MiB/s (54.7MB/s)(536MiB/10271msec) 00:25:52.364 slat (usec): min=33, max=270068, avg=18969.03, stdev=51360.92 00:25:52.364 clat (msec): min=101, max=3691, avg=2161.67, stdev=799.40 00:25:52.364 lat (msec): min=300, max=3700, avg=2180.64, stdev=801.37 00:25:52.364 clat percentiles (msec): 00:25:52.364 | 1.00th=[ 305], 5.00th=[ 531], 10.00th=[ 978], 20.00th=[ 1636], 
00:25:52.364 | 30.00th=[ 1854], 40.00th=[ 2106], 50.00th=[ 2232], 60.00th=[ 2333], 00:25:52.364 | 70.00th=[ 2467], 80.00th=[ 2635], 90.00th=[ 3339], 95.00th=[ 3540], 00:25:52.364 | 99.00th=[ 3675], 99.50th=[ 3675], 99.90th=[ 3708], 99.95th=[ 3708], 00:25:52.364 | 99.99th=[ 3708] 00:25:52.364 bw ( KiB/s): min= 4096, max=94019, per=1.33%, avg=55699.27, stdev=23761.31, samples=15 00:25:52.364 iops : min= 4, max= 91, avg=54.33, stdev=23.11, samples=15 00:25:52.364 lat (msec) : 250=0.19%, 500=2.80%, 750=3.17%, 1000=5.41%, 2000=23.69% 00:25:52.364 lat (msec) : >=2000=64.74% 00:25:52.364 cpu : usr=0.00%, sys=1.39%, ctx=735, majf=0, minf=32769 00:25:52.364 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2% 00:25:52.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.364 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.364 issued rwts: total=536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.364 job5: (groupid=0, jobs=1): err= 0: pid=3321467: Wed Oct 9 02:07:10 2024 00:25:52.364 read: IOPS=57, BW=57.6MiB/s (60.4MB/s)(590MiB/10249msec) 00:25:52.364 slat (usec): min=36, max=293848, avg=17035.80, stdev=39084.55 00:25:52.364 clat (msec): min=194, max=3217, avg=2040.24, stdev=751.69 00:25:52.364 lat (msec): min=361, max=3219, avg=2057.27, stdev=752.50 00:25:52.364 clat percentiles (msec): 00:25:52.364 | 1.00th=[ 414], 5.00th=[ 1083], 10.00th=[ 1099], 20.00th=[ 1217], 00:25:52.364 | 30.00th=[ 1334], 40.00th=[ 1754], 50.00th=[ 2089], 60.00th=[ 2400], 00:25:52.364 | 70.00th=[ 2635], 80.00th=[ 2836], 90.00th=[ 3037], 95.00th=[ 3104], 00:25:52.364 | 99.00th=[ 3138], 99.50th=[ 3138], 99.90th=[ 3205], 99.95th=[ 3205], 00:25:52.364 | 99.99th=[ 3205] 00:25:52.364 bw ( KiB/s): min=16384, max=124928, per=1.42%, avg=59138.81, stdev=34175.49, samples=16 00:25:52.364 iops : min= 16, max= 122, avg=57.75, stdev=33.38, samples=16 00:25:52.364 lat (msec) : 250=0.17%, 500=1.36%, 750=1.36%, 1000=1.69%, 2000=41.69% 00:25:52.364 lat (msec) : >=2000=53.73% 00:25:52.364 cpu : usr=0.05%, sys=1.58%, ctx=846, majf=0, minf=32769 00:25:52.364 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.4%, >=64=89.3% 00:25:52.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.364 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.364 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.364 job5: (groupid=0, jobs=1): err= 0: pid=3321468: Wed Oct 9 02:07:10 2024 00:25:52.364 read: IOPS=32, BW=32.4MiB/s (34.0MB/s)(330MiB/10176msec) 00:25:52.364 slat (usec): min=123, max=224959, avg=30341.77, stdev=35702.26 00:25:52.364 clat (msec): min=159, max=4497, avg=3179.46, stdev=1255.65 00:25:52.364 lat (msec): min=231, max=4557, avg=3209.80, stdev=1258.28 00:25:52.364 clat percentiles (msec): 00:25:52.364 | 1.00th=[ 236], 5.00th=[ 600], 10.00th=[ 1045], 20.00th=[ 1838], 00:25:52.364 | 30.00th=[ 2735], 40.00th=[ 3406], 50.00th=[ 3641], 60.00th=[ 4010], 00:25:52.364 | 70.00th=[ 4111], 80.00th=[ 4245], 90.00th=[ 4329], 95.00th=[ 4463], 00:25:52.364 | 99.00th=[ 4463], 99.50th=[ 4463], 99.90th=[ 4530], 99.95th=[ 4530], 00:25:52.364 | 99.99th=[ 4530] 00:25:52.364 bw ( KiB/s): min=26624, max=45056, per=0.83%, avg=34468.50, stdev=6464.07, samples=12 00:25:52.364 iops : min= 26, max= 44, avg=33.58, stdev= 6.29, samples=12 00:25:52.364 lat (msec) : 
250=1.21%, 500=2.73%, 750=3.03%, 1000=2.42%, 2000=12.42% 00:25:52.364 lat (msec) : >=2000=78.18% 00:25:52.364 cpu : usr=0.02%, sys=1.52%, ctx=626, majf=0, minf=32769 00:25:52.364 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.7%, >=64=80.9% 00:25:52.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.364 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:25:52.364 issued rwts: total=330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.364 job5: (groupid=0, jobs=1): err= 0: pid=3321469: Wed Oct 9 02:07:10 2024 00:25:52.364 read: IOPS=53, BW=53.2MiB/s (55.8MB/s)(543MiB/10197msec) 00:25:52.364 slat (usec): min=47, max=188175, avg=18484.42, stdev=35363.44 00:25:52.364 clat (msec): min=156, max=3203, avg=2073.19, stdev=702.88 00:25:52.364 lat (msec): min=198, max=3301, avg=2091.67, stdev=703.23 00:25:52.364 clat percentiles (msec): 00:25:52.364 | 1.00th=[ 300], 5.00th=[ 793], 10.00th=[ 1200], 20.00th=[ 1385], 00:25:52.364 | 30.00th=[ 1703], 40.00th=[ 1921], 50.00th=[ 2056], 60.00th=[ 2333], 00:25:52.364 | 70.00th=[ 2567], 80.00th=[ 2735], 90.00th=[ 2970], 95.00th=[ 3071], 00:25:52.364 | 99.00th=[ 3205], 99.50th=[ 3205], 99.90th=[ 3205], 99.95th=[ 3205], 00:25:52.364 | 99.99th=[ 3205] 00:25:52.364 bw ( KiB/s): min= 8175, max=122880, per=1.36%, avg=56672.20, stdev=31328.82, samples=15 00:25:52.364 iops : min= 7, max= 120, avg=55.13, stdev=30.71, samples=15 00:25:52.364 lat (msec) : 250=0.37%, 500=2.21%, 750=1.66%, 1000=1.47%, 2000=40.70% 00:25:52.364 lat (msec) : >=2000=53.59% 00:25:52.364 cpu : usr=0.05%, sys=1.24%, ctx=804, majf=0, minf=32769 00:25:52.364 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.9%, >=64=88.4% 00:25:52.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.364 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.364 issued rwts: total=543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.364 job5: (groupid=0, jobs=1): err= 0: pid=3321470: Wed Oct 9 02:07:10 2024 00:25:52.364 read: IOPS=54, BW=54.2MiB/s (56.8MB/s)(549MiB/10137msec) 00:25:52.364 slat (usec): min=43, max=211681, avg=18381.06, stdev=42188.99 00:25:52.364 clat (msec): min=42, max=3168, avg=2056.86, stdev=869.52 00:25:52.364 lat (msec): min=189, max=3337, avg=2075.24, stdev=873.37 00:25:52.364 clat percentiles (msec): 00:25:52.364 | 1.00th=[ 201], 5.00th=[ 388], 10.00th=[ 693], 20.00th=[ 1267], 00:25:52.364 | 30.00th=[ 1351], 40.00th=[ 2232], 50.00th=[ 2400], 60.00th=[ 2500], 00:25:52.364 | 70.00th=[ 2735], 80.00th=[ 2836], 90.00th=[ 2970], 95.00th=[ 3037], 00:25:52.364 | 99.00th=[ 3138], 99.50th=[ 3138], 99.90th=[ 3171], 99.95th=[ 3171], 00:25:52.364 | 99.99th=[ 3171] 00:25:52.364 bw ( KiB/s): min=14336, max=96256, per=1.29%, avg=53874.62, stdev=28332.55, samples=16 00:25:52.364 iops : min= 14, max= 94, avg=52.44, stdev=27.79, samples=16 00:25:52.364 lat (msec) : 50=0.18%, 250=2.73%, 500=2.91%, 750=5.46%, 1000=5.65% 00:25:52.364 lat (msec) : 2000=20.04%, >=2000=63.02% 00:25:52.364 cpu : usr=0.05%, sys=1.32%, ctx=770, majf=0, minf=32769 00:25:52.364 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.8%, >=64=88.5% 00:25:52.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.364 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.364 issued rwts: total=549,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:25:52.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.364 job5: (groupid=0, jobs=1): err= 0: pid=3321471: Wed Oct 9 02:07:10 2024 00:25:52.364 read: IOPS=34, BW=34.9MiB/s (36.5MB/s)(355MiB/10186msec) 00:25:52.364 slat (usec): min=161, max=215407, avg=28167.46, stdev=36989.81 00:25:52.364 clat (msec): min=183, max=4351, avg=3196.68, stdev=1087.73 00:25:52.364 lat (msec): min=222, max=4360, avg=3224.85, stdev=1087.76 00:25:52.364 clat percentiles (msec): 00:25:52.364 | 1.00th=[ 279], 5.00th=[ 735], 10.00th=[ 1301], 20.00th=[ 2265], 00:25:52.364 | 30.00th=[ 3104], 40.00th=[ 3339], 50.00th=[ 3608], 60.00th=[ 3809], 00:25:52.364 | 70.00th=[ 3977], 80.00th=[ 4044], 90.00th=[ 4144], 95.00th=[ 4212], 00:25:52.364 | 99.00th=[ 4329], 99.50th=[ 4329], 99.90th=[ 4329], 99.95th=[ 4329], 00:25:52.364 | 99.99th=[ 4329] 00:25:52.364 bw ( KiB/s): min=12288, max=49053, per=0.75%, avg=31118.07, stdev=8325.32, samples=15 00:25:52.364 iops : min= 12, max= 47, avg=30.27, stdev= 7.95, samples=15 00:25:52.364 lat (msec) : 250=0.85%, 500=2.54%, 750=1.69%, 1000=2.25%, 2000=8.73% 00:25:52.364 lat (msec) : >=2000=83.94% 00:25:52.364 cpu : usr=0.03%, sys=1.62%, ctx=640, majf=0, minf=32769 00:25:52.364 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.5%, 32=9.0%, >=64=82.3% 00:25:52.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.364 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:52.364 issued rwts: total=355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.364 job5: (groupid=0, jobs=1): err= 0: pid=3321472: Wed Oct 9 02:07:10 2024 00:25:52.364 read: IOPS=38, BW=38.9MiB/s (40.8MB/s)(400MiB/10272msec) 00:25:52.364 slat (usec): min=69, max=211675, avg=25149.84, stdev=49090.43 00:25:52.364 clat (msec): min=209, max=4779, avg=3021.79, stdev=1154.98 00:25:52.364 lat (msec): min=370, max=4785, avg=3046.94, stdev=1156.55 00:25:52.364 clat percentiles (msec): 00:25:52.364 | 1.00th=[ 426], 5.00th=[ 927], 10.00th=[ 1569], 20.00th=[ 2265], 00:25:52.364 | 30.00th=[ 2400], 40.00th=[ 2534], 50.00th=[ 2668], 60.00th=[ 3171], 00:25:52.364 | 70.00th=[ 3977], 80.00th=[ 4396], 90.00th=[ 4597], 95.00th=[ 4597], 00:25:52.364 | 99.00th=[ 4732], 99.50th=[ 4732], 99.90th=[ 4799], 99.95th=[ 4799], 00:25:52.364 | 99.99th=[ 4799] 00:25:52.364 bw ( KiB/s): min=14336, max=88064, per=0.83%, avg=34812.13, stdev=16889.25, samples=16 00:25:52.364 iops : min= 14, max= 86, avg=33.94, stdev=16.51, samples=16 00:25:52.364 lat (msec) : 250=0.25%, 500=1.50%, 750=1.50%, 1000=1.75%, 2000=10.00% 00:25:52.364 lat (msec) : >=2000=85.00% 00:25:52.364 cpu : usr=0.05%, sys=1.49%, ctx=670, majf=0, minf=32769 00:25:52.364 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.0%, >=64=84.2% 00:25:52.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.364 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:52.364 issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.364 job5: (groupid=0, jobs=1): err= 0: pid=3321473: Wed Oct 9 02:07:10 2024 00:25:52.364 read: IOPS=51, BW=51.9MiB/s (54.4MB/s)(529MiB/10190msec) 00:25:52.364 slat (usec): min=43, max=191859, avg=18918.75, stdev=39196.14 00:25:52.364 clat (msec): min=179, max=3419, avg=2175.56, stdev=702.50 00:25:52.364 lat (msec): min=205, max=3424, avg=2194.47, stdev=702.38 
00:25:52.364 clat percentiles (msec): 00:25:52.364 | 1.00th=[ 296], 5.00th=[ 651], 10.00th=[ 1116], 20.00th=[ 1620], 00:25:52.364 | 30.00th=[ 1972], 40.00th=[ 2165], 50.00th=[ 2333], 60.00th=[ 2433], 00:25:52.364 | 70.00th=[ 2534], 80.00th=[ 2735], 90.00th=[ 2970], 95.00th=[ 3138], 00:25:52.364 | 99.00th=[ 3272], 99.50th=[ 3306], 99.90th=[ 3406], 99.95th=[ 3406], 00:25:52.364 | 99.99th=[ 3406] 00:25:52.364 bw ( KiB/s): min=10240, max=94208, per=1.23%, avg=51445.00, stdev=22301.49, samples=16 00:25:52.364 iops : min= 10, max= 92, avg=50.19, stdev=21.69, samples=16 00:25:52.364 lat (msec) : 250=0.95%, 500=2.84%, 750=2.27%, 1000=2.84%, 2000=23.25% 00:25:52.364 lat (msec) : >=2000=67.86% 00:25:52.364 cpu : usr=0.01%, sys=1.34%, ctx=831, majf=0, minf=32769 00:25:52.364 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.1% 00:25:52.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.364 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.364 issued rwts: total=529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.364 job5: (groupid=0, jobs=1): err= 0: pid=3321474: Wed Oct 9 02:07:10 2024 00:25:52.364 read: IOPS=52, BW=52.3MiB/s (54.8MB/s)(535MiB/10228msec) 00:25:52.364 slat (usec): min=33, max=336939, avg=18774.92, stdev=48500.29 00:25:52.364 clat (msec): min=181, max=3699, avg=2235.38, stdev=851.55 00:25:52.364 lat (msec): min=374, max=3721, avg=2254.15, stdev=854.85 00:25:52.364 clat percentiles (msec): 00:25:52.364 | 1.00th=[ 380], 5.00th=[ 600], 10.00th=[ 1028], 20.00th=[ 1670], 00:25:52.364 | 30.00th=[ 1770], 40.00th=[ 1955], 50.00th=[ 2165], 60.00th=[ 2467], 00:25:52.364 | 70.00th=[ 2937], 80.00th=[ 3138], 90.00th=[ 3272], 95.00th=[ 3440], 00:25:52.364 | 99.00th=[ 3641], 99.50th=[ 3708], 99.90th=[ 3708], 99.95th=[ 3708], 00:25:52.364 | 99.99th=[ 3708] 00:25:52.364 bw ( KiB/s): min=26624, max=94208, per=1.25%, avg=52103.94, stdev=20661.00, samples=16 00:25:52.364 iops : min= 26, max= 92, avg=50.88, stdev=20.17, samples=16 00:25:52.364 lat (msec) : 250=0.19%, 500=2.80%, 750=2.99%, 1000=2.80%, 2000=37.57% 00:25:52.364 lat (msec) : >=2000=53.64% 00:25:52.364 cpu : usr=0.05%, sys=1.39%, ctx=708, majf=0, minf=32769 00:25:52.364 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2% 00:25:52.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.364 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:52.364 issued rwts: total=535,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.364 00:25:52.364 Run status group 0 (all jobs): 00:25:52.364 READ: bw=4077MiB/s (4275MB/s), 29.7MiB/s-94.4MiB/s (31.2MB/s-99.0MB/s), io=41.0GiB (44.0GB), run=10100-10298msec 00:25:52.364 00:25:52.364 Disk stats (read/write): 00:25:52.364 nvme0n1: ios=55502/0, merge=0/0, ticks=11105974/0, in_queue=11105974, util=98.21% 00:25:52.364 nvme1n1: ios=61679/0, merge=0/0, ticks=12182088/0, in_queue=12182088, util=98.53% 00:25:52.364 nvme2n1: ios=54764/0, merge=0/0, ticks=11817925/0, in_queue=11817925, util=98.50% 00:25:52.364 nvme3n1: ios=56907/0, merge=0/0, ticks=11253098/0, in_queue=11253098, util=98.72% 00:25:52.364 nvme4n1: ios=55264/0, merge=0/0, ticks=11379762/0, in_queue=11379762, util=98.93% 00:25:52.364 nvme5n1: ios=51551/0, merge=0/0, ticks=10249021/0, in_queue=10249021, util=99.16% 00:25:52.364 02:07:11 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:25:52.364 02:07:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:25:52.364 02:07:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:52.364 02:07:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:25:52.364 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:25:52.364 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:25:52.364 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:25:52.364 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:52.364 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000000 00:25:52.364 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:52.364 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000000 00:25:52.364 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:25:52.364 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:52.364 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.364 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:52.364 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.364 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:52.364 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:53.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:53.350 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:25:53.350 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:25:53.350 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:53.350 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000001 00:25:53.350 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:53.350 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000001 00:25:53.350 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:25:53.350 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:53.350 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.350 02:07:12 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:53.350 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.350 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:53.350 02:07:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:54.297 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:54.298 02:07:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:25:54.298 02:07:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:25:54.298 02:07:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:54.298 02:07:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000002 00:25:54.298 02:07:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:54.298 02:07:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000002 00:25:54.298 02:07:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:25:54.298 02:07:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:54.298 02:07:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.298 02:07:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:54.298 02:07:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.298 02:07:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:54.298 02:07:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:55.233 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:55.233 02:07:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:25:55.233 02:07:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:25:55.233 02:07:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:55.233 02:07:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000003 00:25:55.233 02:07:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:55.233 02:07:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000003 00:25:55.233 02:07:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:25:55.233 02:07:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:55.233 02:07:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:55.234 02:07:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:55.234 02:07:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.234 02:07:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:55.234 02:07:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:55.800 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:55.800 02:07:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:25:55.800 02:07:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:25:55.800 02:07:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:55.800 02:07:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000004 00:25:56.059 02:07:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:56.059 02:07:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000004 00:25:56.059 02:07:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:25:56.059 02:07:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:56.059 02:07:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.059 02:07:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:56.059 02:07:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.059 02:07:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:56.059 02:07:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:56.992 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000005 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000005 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:56.992 rmmod nvme_rdma 00:25:56.992 rmmod nvme_fabrics 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:56.992 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:25:56.993 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:25:56.993 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@515 -- # '[' -n 3320690 ']' 00:25:56.993 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # killprocess 3320690 00:25:56.993 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@950 -- # '[' -z 3320690 ']' 00:25:56.993 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # kill -0 3320690 00:25:56.993 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # uname 00:25:56.993 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:56.993 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3320690 00:25:56.993 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:56.993 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:56.993 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3320690' 00:25:56.993 killing process with pid 3320690 00:25:56.993 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@969 -- # kill 3320690 00:25:56.993 02:07:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@974 -- # wait 3320690 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:25:59.523 00:25:59.523 real 0m29.361s 00:25:59.523 user 1m32.874s 00:25:59.523 sys 0m19.372s 
00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:59.523 ************************************ 00:25:59.523 END TEST nvmf_srq_overwhelm 00:25:59.523 ************************************ 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:59.523 ************************************ 00:25:59.523 START TEST nvmf_shutdown 00:25:59.523 ************************************ 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:25:59.523 * Looking for test storage... 00:25:59.523 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:59.523 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:59.524 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:59.524 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:59.524 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:59.524 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:59.524 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:59.524 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:59.524 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:59.524 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:59.524 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:59.524 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:59.524 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:25:59.524 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:59.524 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:59.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.524 --rc genhtml_branch_coverage=1 00:25:59.524 --rc genhtml_function_coverage=1 00:25:59.524 --rc genhtml_legend=1 00:25:59.524 --rc geninfo_all_blocks=1 00:25:59.524 --rc geninfo_unexecuted_blocks=1 00:25:59.524 00:25:59.524 ' 00:25:59.524 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:59.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.524 --rc genhtml_branch_coverage=1 00:25:59.524 --rc genhtml_function_coverage=1 00:25:59.524 --rc genhtml_legend=1 00:25:59.524 --rc geninfo_all_blocks=1 00:25:59.524 --rc geninfo_unexecuted_blocks=1 00:25:59.524 00:25:59.524 ' 00:25:59.524 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:59.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.524 --rc genhtml_branch_coverage=1 00:25:59.524 --rc genhtml_function_coverage=1 00:25:59.524 --rc genhtml_legend=1 00:25:59.524 --rc geninfo_all_blocks=1 00:25:59.524 --rc geninfo_unexecuted_blocks=1 00:25:59.524 00:25:59.524 ' 00:25:59.524 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:59.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.524 --rc genhtml_branch_coverage=1 00:25:59.524 --rc genhtml_function_coverage=1 00:25:59.524 --rc genhtml_legend=1 00:25:59.524 --rc geninfo_all_blocks=1 00:25:59.524 --rc geninfo_unexecuted_blocks=1 00:25:59.524 00:25:59.524 ' 00:25:59.524 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:25:59.524 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- 
# uname -s 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:59.783 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:59.783 
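Two details in the sourcing above are worth unpacking. First, the `lt 1.15 2` probe decides which lcov coverage flags get exported by comparing version strings component-wise. A simplified reconstruction of the scripts/common.sh comparison seen in the trace (padding missing components with 0 and the handling of non-numeric parts are assumptions of this sketch):

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local IFS=.-: ver1 ver2 v a b
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      # Walk the longer component list, treating absent components as 0
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}
          ((a < b)) && return 0
          ((a > b)) && return 1
      done
      return 1   # equal is not strictly less-than
  }

Second, the "[: : integer expression expected" complaint is noisy but not fatal: line 33 of nvmf/common.sh evaluates '[ "" -eq 1 ]', and test cannot coerce an empty string to an integer. Defaulting the operand avoids the noise (the variable name below is a stand-in, not the actual variable on that line):

  flag=""                           # empty in the failing run
  if [ "${flag:-0}" -eq 1 ]; then   # default to 0 so test always sees an integer
      echo "feature enabled"
  fi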
02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:59.783 ************************************ 00:25:59.783 START TEST nvmf_shutdown_tc1 00:25:59.783 ************************************ 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:59.783 02:07:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:06.337 02:07:25 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == 
e810 ]] 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:26:06.337 Found 0000:18:00.0 (0x8086 - 0x159b) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:26:06.337 Found 0000:18:00.1 (0x8086 - 0x159b) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@403 -- # modinfo irdma 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:26:06.337 Found net devices under 0000:18:00.0: cvl_0_0 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:26:06.337 Found net devices under 0000:18:00.1: cvl_0_1 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # rdma_device_init 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:06.337 02:07:25 
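The "Found net devices under ..." pairs come straight from sysfs: for each selected PCI function, the kernel exposes the bound network interface under /sys/bus/pci/devices/<bdf>/net/. The discovery loop, reduced to exactly what the trace executes:

  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per bound interface
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the name
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done

Because both E810 ports carry a netdev (cvl_0_0 and cvl_0_1), is_hw flips to yes and rdma_device_init proceeds to load the hardware RDMA stack rather than a software fallback.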
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@528 -- # allocate_nic_ips 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:06.337 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo cvl_0_0 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo cvl_0_1 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:26:06.338 02:07:25 
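The module-load step just above is a fixed sequence; get_rdma_if_list then intersects the discovered netdevs with what rxe_cfg reports as RDMA-capable, echoing each match (hence cvl_0_0 and cvl_0_1 each followed by "continue 2"). The loader, condensed into a loop for readability (the original issues one modprobe per line; behavior is the same):

  load_ib_rdma_modules() {
      [ "$(uname)" != Linux ] && return 0
      local mod
      # Order mirrors the trace: core IB stack first, then the CM layers
      for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
          modprobe "$mod"
      done
  }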
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:26:06.338 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:26:06.338 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:26:06.338 altname enp24s0f0np0 00:26:06.338 altname ens785f0np0 00:26:06.338 inet 192.168.100.8/24 scope global cvl_0_0 00:26:06.338 valid_lft forever preferred_lft forever 00:26:06.338 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:26:06.338 valid_lft forever preferred_lft forever 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:26:06.338 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:26:06.338 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:26:06.338 altname enp24s0f1np1 00:26:06.338 altname ens785f1np1 00:26:06.338 inet 192.168.100.9/24 scope global cvl_0_1 00:26:06.338 valid_lft forever preferred_lft forever 00:26:06.338 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:26:06.338 valid_lft forever preferred_lft forever 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 
-- # get_available_rdma_ips 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo cvl_0_0 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo cvl_0_1 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # 
get_ip_address cvl_0_1 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:26:06.338 192.168.100.9' 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:26:06.338 192.168.100.9' 00:26:06.338 02:07:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # head -n 1 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:26:06.338 192.168.100.9' 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # tail -n +2 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # head -n 1 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=3326350 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 3326350 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3326350 ']' 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:06.338 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:06.339 [2024-10-09 02:07:26.145490] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:26:06.339 [2024-10-09 02:07:26.145619] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.596 [2024-10-09 02:07:26.275273] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:06.853 [2024-10-09 02:07:26.476306] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.853 [2024-10-09 02:07:26.476360] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:06.853 [2024-10-09 02:07:26.476376] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:06.853 [2024-10-09 02:07:26.476394] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:06.853 [2024-10-09 02:07:26.476405] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
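waitforlisten blocks until the freshly started nvmf_tgt (pid 3326350) actually owns its RPC socket. A sketch of that gate, assuming a simple liveness-plus-socket probe (the real helper in autotest_common.sh also exercises the RPC server itself, and its retry budget differs):

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 100; i > 0; i--)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
          [ -S "$rpc_addr" ] && return 0           # socket exists; RPCs can be issued
          sleep 0.1
      done
      return 1
  }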
00:26:06.853 [2024-10-09 02:07:26.478814] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:06.853 [2024-10-09 02:07:26.478837] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:06.853 [2024-10-09 02:07:26.478875] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.853 [2024-10-09 02:07:26.478896] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:26:07.418 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:07.418 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:07.418 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:07.418 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:07.418 02:07:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:07.418 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:07.418 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:07.418 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.418 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:07.418 [2024-10-09 02:07:27.025414] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x612000029440/0x617000007c40) succeed. 00:26:07.418 [2024-10-09 02:07:27.035097] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x6120000295c0/0x617000007fc0) succeed. 00:26:07.418 [2024-10-09 02:07:27.035133] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:26:07.418 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.418 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:07.418 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:07.419 02:07:27 
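Each of the ten "cat" hits above appends one subsystem's worth of RPC lines to rpcs.txt, and the single rpc_cmd that follows replays the whole file in one batch; that is where the Malloc1..Malloc10 bdevs and the 4420 listener below come from. A sketch of the per-iteration payload (the exact RPC arguments are an assumption, inferred from the bdev names, cnode NQNs, and listener address in the output):

  for i in "${num_subsystems[@]}"; do
      {
          echo "bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc$i"
          echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
          echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
          echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t $TEST_TRANSPORT -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT"
      } >> "$testdir/rpcs.txt"
  done
  rpc_cmd < "$testdir/rpcs.txt"   # one batched replay over /var/tmp/spdk.sock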
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.419 02:07:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:07.419 Malloc1 00:26:07.419 [2024-10-09 02:07:27.205559] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:07.676 Malloc2 00:26:07.676 Malloc3 00:26:07.676 Malloc4 00:26:07.933 Malloc5 00:26:07.933 Malloc6 00:26:08.190 Malloc7 00:26:08.190 Malloc8 00:26:08.190 Malloc9 00:26:08.449 Malloc10 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3326741 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3326741 /var/tmp/bdevperf.sock 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3326741 ']' 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:08.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
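The perf side is a second SPDK app: bdev_svc pinned to core mask 0x1, instance id 1, with its own RPC socket, fed the generated controller config through process substitution; the /dev/fd/63 in the command line above is bash's name for that pipe. Standalone, the same wiring looks like this (path shown relative to the spdk checkout):

  # gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per
  # subsystem; bdev_svc consumes the result as its startup JSON config.
  ./test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json "${num_subsystems[@]}")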
00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:08.449 { 00:26:08.449 "params": { 00:26:08.449 "name": "Nvme$subsystem", 00:26:08.449 "trtype": "$TEST_TRANSPORT", 00:26:08.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.449 "adrfam": "ipv4", 00:26:08.449 "trsvcid": "$NVMF_PORT", 00:26:08.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.449 "hdgst": ${hdgst:-false}, 00:26:08.449 "ddgst": ${ddgst:-false} 00:26:08.449 }, 00:26:08.449 "method": "bdev_nvme_attach_controller" 00:26:08.449 } 00:26:08.449 EOF 00:26:08.449 )") 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:08.449 { 00:26:08.449 "params": { 00:26:08.449 "name": "Nvme$subsystem", 00:26:08.449 "trtype": "$TEST_TRANSPORT", 00:26:08.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.449 "adrfam": "ipv4", 00:26:08.449 "trsvcid": "$NVMF_PORT", 00:26:08.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.449 "hdgst": ${hdgst:-false}, 00:26:08.449 "ddgst": ${ddgst:-false} 00:26:08.449 }, 00:26:08.449 "method": "bdev_nvme_attach_controller" 00:26:08.449 } 00:26:08.449 EOF 00:26:08.449 )") 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:08.449 { 00:26:08.449 "params": { 00:26:08.449 "name": "Nvme$subsystem", 00:26:08.449 "trtype": "$TEST_TRANSPORT", 00:26:08.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.449 "adrfam": "ipv4", 00:26:08.449 "trsvcid": "$NVMF_PORT", 00:26:08.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.449 "hdgst": ${hdgst:-false}, 00:26:08.449 "ddgst": ${ddgst:-false} 00:26:08.449 }, 00:26:08.449 "method": "bdev_nvme_attach_controller" 00:26:08.449 } 00:26:08.449 EOF 00:26:08.449 )") 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:08.449 { 00:26:08.449 "params": { 00:26:08.449 "name": "Nvme$subsystem", 00:26:08.449 "trtype": "$TEST_TRANSPORT", 00:26:08.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.449 "adrfam": "ipv4", 00:26:08.449 "trsvcid": "$NVMF_PORT", 00:26:08.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.449 "hdgst": ${hdgst:-false}, 00:26:08.449 "ddgst": ${ddgst:-false} 00:26:08.449 }, 00:26:08.449 "method": "bdev_nvme_attach_controller" 00:26:08.449 } 00:26:08.449 EOF 00:26:08.449 )") 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:08.449 { 00:26:08.449 "params": { 00:26:08.449 "name": "Nvme$subsystem", 00:26:08.449 "trtype": "$TEST_TRANSPORT", 00:26:08.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.449 "adrfam": "ipv4", 00:26:08.449 "trsvcid": "$NVMF_PORT", 00:26:08.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.449 "hdgst": ${hdgst:-false}, 00:26:08.449 "ddgst": ${ddgst:-false} 00:26:08.449 }, 00:26:08.449 "method": "bdev_nvme_attach_controller" 00:26:08.449 } 00:26:08.449 EOF 00:26:08.449 )") 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:08.449 { 00:26:08.449 "params": { 00:26:08.449 "name": "Nvme$subsystem", 00:26:08.449 "trtype": "$TEST_TRANSPORT", 00:26:08.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.449 "adrfam": "ipv4", 00:26:08.449 "trsvcid": "$NVMF_PORT", 00:26:08.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.449 "hdgst": ${hdgst:-false}, 00:26:08.449 "ddgst": ${ddgst:-false} 00:26:08.449 }, 00:26:08.449 "method": "bdev_nvme_attach_controller" 00:26:08.449 } 00:26:08.449 EOF 00:26:08.449 )") 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:08.449 { 00:26:08.449 "params": { 00:26:08.449 "name": "Nvme$subsystem", 00:26:08.449 "trtype": "$TEST_TRANSPORT", 00:26:08.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.449 "adrfam": "ipv4", 00:26:08.449 "trsvcid": "$NVMF_PORT", 00:26:08.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.449 "hdgst": ${hdgst:-false}, 00:26:08.449 "ddgst": ${ddgst:-false} 00:26:08.449 }, 00:26:08.449 "method": "bdev_nvme_attach_controller" 00:26:08.449 } 00:26:08.449 EOF 00:26:08.449 )") 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:08.449 02:07:28 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:08.449 { 00:26:08.449 "params": { 00:26:08.449 "name": "Nvme$subsystem", 00:26:08.449 "trtype": "$TEST_TRANSPORT", 00:26:08.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.449 "adrfam": "ipv4", 00:26:08.449 "trsvcid": "$NVMF_PORT", 00:26:08.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.449 "hdgst": ${hdgst:-false}, 00:26:08.449 "ddgst": ${ddgst:-false} 00:26:08.449 }, 00:26:08.449 "method": "bdev_nvme_attach_controller" 00:26:08.449 } 00:26:08.449 EOF 00:26:08.449 )") 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:08.449 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:08.449 { 00:26:08.449 "params": { 00:26:08.449 "name": "Nvme$subsystem", 00:26:08.450 "trtype": "$TEST_TRANSPORT", 00:26:08.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.450 "adrfam": "ipv4", 00:26:08.450 "trsvcid": "$NVMF_PORT", 00:26:08.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.450 "hdgst": ${hdgst:-false}, 00:26:08.450 "ddgst": ${ddgst:-false} 00:26:08.450 }, 00:26:08.450 "method": "bdev_nvme_attach_controller" 00:26:08.450 } 00:26:08.450 EOF 00:26:08.450 )") 00:26:08.450 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:08.450 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:08.450 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:08.450 { 00:26:08.450 "params": { 00:26:08.450 "name": "Nvme$subsystem", 00:26:08.450 "trtype": "$TEST_TRANSPORT", 00:26:08.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.450 "adrfam": "ipv4", 00:26:08.450 "trsvcid": "$NVMF_PORT", 00:26:08.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.450 "hdgst": ${hdgst:-false}, 00:26:08.450 "ddgst": ${ddgst:-false} 00:26:08.450 }, 00:26:08.450 "method": "bdev_nvme_attach_controller" 00:26:08.450 } 00:26:08.450 EOF 00:26:08.450 )") 00:26:08.450 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:08.450 [2024-10-09 02:07:28.205785] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:26:08.450 [2024-10-09 02:07:28.205879] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:08.450 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
00:26:08.450 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:26:08.450 02:07:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:26:08.450 "params": { 00:26:08.450 "name": "Nvme1", 00:26:08.450 "trtype": "rdma", 00:26:08.450 "traddr": "192.168.100.8", 00:26:08.450 "adrfam": "ipv4", 00:26:08.450 "trsvcid": "4420", 00:26:08.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:08.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:08.450 "hdgst": false, 00:26:08.450 "ddgst": false 00:26:08.450 }, 00:26:08.450 "method": "bdev_nvme_attach_controller" 00:26:08.450 },{ 00:26:08.450 "params": { 00:26:08.450 "name": "Nvme2", 00:26:08.450 "trtype": "rdma", 00:26:08.450 "traddr": "192.168.100.8", 00:26:08.450 "adrfam": "ipv4", 00:26:08.450 "trsvcid": "4420", 00:26:08.450 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:08.450 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:08.450 "hdgst": false, 00:26:08.450 "ddgst": false 00:26:08.450 }, 00:26:08.450 "method": "bdev_nvme_attach_controller" 00:26:08.450 },{ 00:26:08.450 "params": { 00:26:08.450 "name": "Nvme3", 00:26:08.450 "trtype": "rdma", 00:26:08.450 "traddr": "192.168.100.8", 00:26:08.450 "adrfam": "ipv4", 00:26:08.450 "trsvcid": "4420", 00:26:08.450 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:08.450 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:08.450 "hdgst": false, 00:26:08.450 "ddgst": false 00:26:08.450 }, 00:26:08.450 "method": "bdev_nvme_attach_controller" 00:26:08.450 },{ 00:26:08.450 "params": { 00:26:08.450 "name": "Nvme4", 00:26:08.450 "trtype": "rdma", 00:26:08.450 "traddr": "192.168.100.8", 00:26:08.450 "adrfam": "ipv4", 00:26:08.450 "trsvcid": "4420", 00:26:08.450 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:08.450 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:08.450 "hdgst": false, 00:26:08.450 "ddgst": false 00:26:08.450 }, 00:26:08.450 "method": "bdev_nvme_attach_controller" 00:26:08.450 },{ 00:26:08.450 "params": { 00:26:08.450 "name": "Nvme5", 00:26:08.450 "trtype": "rdma", 00:26:08.450 "traddr": "192.168.100.8", 00:26:08.450 "adrfam": "ipv4", 00:26:08.450 "trsvcid": "4420", 00:26:08.450 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:08.450 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:08.450 "hdgst": false, 00:26:08.450 "ddgst": false 00:26:08.450 }, 00:26:08.450 "method": "bdev_nvme_attach_controller" 00:26:08.450 },{ 00:26:08.450 "params": { 00:26:08.450 "name": "Nvme6", 00:26:08.450 "trtype": "rdma", 00:26:08.450 "traddr": "192.168.100.8", 00:26:08.450 "adrfam": "ipv4", 00:26:08.450 "trsvcid": "4420", 00:26:08.450 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:08.450 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:08.450 "hdgst": false, 00:26:08.450 "ddgst": false 00:26:08.450 }, 00:26:08.450 "method": "bdev_nvme_attach_controller" 00:26:08.450 },{ 00:26:08.450 "params": { 00:26:08.450 "name": "Nvme7", 00:26:08.450 "trtype": "rdma", 00:26:08.450 "traddr": "192.168.100.8", 00:26:08.450 "adrfam": "ipv4", 00:26:08.450 "trsvcid": "4420", 00:26:08.450 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:08.450 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:08.450 "hdgst": false, 00:26:08.450 "ddgst": false 00:26:08.450 }, 00:26:08.450 "method": "bdev_nvme_attach_controller" 00:26:08.450 },{ 00:26:08.450 "params": { 00:26:08.450 "name": "Nvme8", 00:26:08.450 "trtype": "rdma", 00:26:08.450 "traddr": "192.168.100.8", 00:26:08.450 "adrfam": "ipv4", 00:26:08.450 "trsvcid": "4420", 00:26:08.450 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:26:08.450 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:08.450 "hdgst": false, 00:26:08.450 "ddgst": false 00:26:08.450 }, 00:26:08.450 "method": "bdev_nvme_attach_controller" 00:26:08.450 },{ 00:26:08.450 "params": { 00:26:08.450 "name": "Nvme9", 00:26:08.450 "trtype": "rdma", 00:26:08.450 "traddr": "192.168.100.8", 00:26:08.450 "adrfam": "ipv4", 00:26:08.450 "trsvcid": "4420", 00:26:08.450 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:08.450 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:08.450 "hdgst": false, 00:26:08.450 "ddgst": false 00:26:08.450 }, 00:26:08.450 "method": "bdev_nvme_attach_controller" 00:26:08.450 },{ 00:26:08.450 "params": { 00:26:08.450 "name": "Nvme10", 00:26:08.450 "trtype": "rdma", 00:26:08.450 "traddr": "192.168.100.8", 00:26:08.450 "adrfam": "ipv4", 00:26:08.450 "trsvcid": "4420", 00:26:08.450 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:08.450 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:08.450 "hdgst": false, 00:26:08.450 "ddgst": false 00:26:08.450 }, 00:26:08.450 "method": "bdev_nvme_attach_controller" 00:26:08.450 }' 00:26:08.707 [2024-10-09 02:07:28.337348] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.965 [2024-10-09 02:07:28.540189] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.897 02:07:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:09.897 02:07:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:09.897 02:07:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:09.897 02:07:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.897 02:07:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:09.897 02:07:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.897 02:07:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3326741 00:26:09.897 02:07:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:26:09.897 02:07:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:26:11.272 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3326741 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3326350 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:26:11.272 02:07:30 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:11.272 { 00:26:11.272 "params": { 00:26:11.272 "name": "Nvme$subsystem", 00:26:11.272 "trtype": "$TEST_TRANSPORT", 00:26:11.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.272 "adrfam": "ipv4", 00:26:11.272 "trsvcid": "$NVMF_PORT", 00:26:11.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.272 "hdgst": ${hdgst:-false}, 00:26:11.272 "ddgst": ${ddgst:-false} 00:26:11.272 }, 00:26:11.272 "method": "bdev_nvme_attach_controller" 00:26:11.272 } 00:26:11.272 EOF 00:26:11.272 )") 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:11.272 { 00:26:11.272 "params": { 00:26:11.272 "name": "Nvme$subsystem", 00:26:11.272 "trtype": "$TEST_TRANSPORT", 00:26:11.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.272 "adrfam": "ipv4", 00:26:11.272 "trsvcid": "$NVMF_PORT", 00:26:11.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.272 "hdgst": ${hdgst:-false}, 00:26:11.272 "ddgst": ${ddgst:-false} 00:26:11.272 }, 00:26:11.272 "method": "bdev_nvme_attach_controller" 00:26:11.272 } 00:26:11.272 EOF 00:26:11.272 )") 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:11.272 { 00:26:11.272 "params": { 00:26:11.272 "name": "Nvme$subsystem", 00:26:11.272 "trtype": "$TEST_TRANSPORT", 00:26:11.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.272 "adrfam": "ipv4", 00:26:11.272 "trsvcid": "$NVMF_PORT", 00:26:11.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.272 "hdgst": ${hdgst:-false}, 00:26:11.272 "ddgst": ${ddgst:-false} 00:26:11.272 }, 00:26:11.272 "method": "bdev_nvme_attach_controller" 00:26:11.272 } 00:26:11.272 EOF 00:26:11.272 )") 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:11.272 { 00:26:11.272 "params": { 00:26:11.272 "name": "Nvme$subsystem", 00:26:11.272 "trtype": "$TEST_TRANSPORT", 00:26:11.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.272 "adrfam": "ipv4", 00:26:11.272 "trsvcid": "$NVMF_PORT", 00:26:11.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.272 "hdgst": ${hdgst:-false}, 00:26:11.272 "ddgst": ${ddgst:-false} 00:26:11.272 }, 00:26:11.272 "method": 
"bdev_nvme_attach_controller" 00:26:11.272 } 00:26:11.272 EOF 00:26:11.272 )") 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:11.272 { 00:26:11.272 "params": { 00:26:11.272 "name": "Nvme$subsystem", 00:26:11.272 "trtype": "$TEST_TRANSPORT", 00:26:11.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.272 "adrfam": "ipv4", 00:26:11.272 "trsvcid": "$NVMF_PORT", 00:26:11.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.272 "hdgst": ${hdgst:-false}, 00:26:11.272 "ddgst": ${ddgst:-false} 00:26:11.272 }, 00:26:11.272 "method": "bdev_nvme_attach_controller" 00:26:11.272 } 00:26:11.272 EOF 00:26:11.272 )") 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:11.272 { 00:26:11.272 "params": { 00:26:11.272 "name": "Nvme$subsystem", 00:26:11.272 "trtype": "$TEST_TRANSPORT", 00:26:11.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.272 "adrfam": "ipv4", 00:26:11.272 "trsvcid": "$NVMF_PORT", 00:26:11.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.272 "hdgst": ${hdgst:-false}, 00:26:11.272 "ddgst": ${ddgst:-false} 00:26:11.272 }, 00:26:11.272 "method": "bdev_nvme_attach_controller" 00:26:11.272 } 00:26:11.272 EOF 00:26:11.272 )") 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:11.272 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:11.272 { 00:26:11.272 "params": { 00:26:11.272 "name": "Nvme$subsystem", 00:26:11.272 "trtype": "$TEST_TRANSPORT", 00:26:11.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.272 "adrfam": "ipv4", 00:26:11.272 "trsvcid": "$NVMF_PORT", 00:26:11.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.272 "hdgst": ${hdgst:-false}, 00:26:11.272 "ddgst": ${ddgst:-false} 00:26:11.272 }, 00:26:11.272 "method": "bdev_nvme_attach_controller" 00:26:11.273 } 00:26:11.273 EOF 00:26:11.273 )") 00:26:11.273 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:11.273 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:11.273 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:11.273 { 00:26:11.273 "params": { 00:26:11.273 "name": "Nvme$subsystem", 00:26:11.273 "trtype": "$TEST_TRANSPORT", 00:26:11.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.273 "adrfam": "ipv4", 00:26:11.273 "trsvcid": "$NVMF_PORT", 00:26:11.273 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.273 "hdgst": ${hdgst:-false}, 00:26:11.273 "ddgst": ${ddgst:-false} 00:26:11.273 }, 00:26:11.273 "method": "bdev_nvme_attach_controller" 00:26:11.273 } 00:26:11.273 EOF 00:26:11.273 )") 00:26:11.273 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:11.273 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:11.273 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:11.273 { 00:26:11.273 "params": { 00:26:11.273 "name": "Nvme$subsystem", 00:26:11.273 "trtype": "$TEST_TRANSPORT", 00:26:11.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.273 "adrfam": "ipv4", 00:26:11.273 "trsvcid": "$NVMF_PORT", 00:26:11.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.273 "hdgst": ${hdgst:-false}, 00:26:11.273 "ddgst": ${ddgst:-false} 00:26:11.273 }, 00:26:11.273 "method": "bdev_nvme_attach_controller" 00:26:11.273 } 00:26:11.273 EOF 00:26:11.273 )") 00:26:11.273 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:11.273 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:11.273 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:11.273 { 00:26:11.273 "params": { 00:26:11.273 "name": "Nvme$subsystem", 00:26:11.273 "trtype": "$TEST_TRANSPORT", 00:26:11.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.273 "adrfam": "ipv4", 00:26:11.273 "trsvcid": "$NVMF_PORT", 00:26:11.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.273 "hdgst": ${hdgst:-false}, 00:26:11.273 "ddgst": ${ddgst:-false} 00:26:11.273 }, 00:26:11.273 "method": "bdev_nvme_attach_controller" 00:26:11.273 } 00:26:11.273 EOF 00:26:11.273 )") 00:26:11.273 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:11.273 [2024-10-09 02:07:30.775996] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:26:11.273 [2024-10-09 02:07:30.776094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3327118 ] 00:26:11.273 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
00:26:11.273 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:26:11.273 02:07:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:26:11.273 "params": { 00:26:11.273 "name": "Nvme1", 00:26:11.273 "trtype": "rdma", 00:26:11.273 "traddr": "192.168.100.8", 00:26:11.273 "adrfam": "ipv4", 00:26:11.273 "trsvcid": "4420", 00:26:11.273 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:11.273 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:11.273 "hdgst": false, 00:26:11.273 "ddgst": false 00:26:11.273 }, 00:26:11.273 "method": "bdev_nvme_attach_controller" 00:26:11.273 },{ 00:26:11.273 "params": { 00:26:11.273 "name": "Nvme2", 00:26:11.273 "trtype": "rdma", 00:26:11.273 "traddr": "192.168.100.8", 00:26:11.273 "adrfam": "ipv4", 00:26:11.273 "trsvcid": "4420", 00:26:11.273 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:11.273 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:11.273 "hdgst": false, 00:26:11.273 "ddgst": false 00:26:11.273 }, 00:26:11.273 "method": "bdev_nvme_attach_controller" 00:26:11.273 },{ 00:26:11.273 "params": { 00:26:11.273 "name": "Nvme3", 00:26:11.273 "trtype": "rdma", 00:26:11.273 "traddr": "192.168.100.8", 00:26:11.273 "adrfam": "ipv4", 00:26:11.273 "trsvcid": "4420", 00:26:11.273 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:11.273 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:11.273 "hdgst": false, 00:26:11.273 "ddgst": false 00:26:11.273 }, 00:26:11.273 "method": "bdev_nvme_attach_controller" 00:26:11.273 },{ 00:26:11.273 "params": { 00:26:11.273 "name": "Nvme4", 00:26:11.273 "trtype": "rdma", 00:26:11.273 "traddr": "192.168.100.8", 00:26:11.273 "adrfam": "ipv4", 00:26:11.273 "trsvcid": "4420", 00:26:11.273 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:11.273 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:11.273 "hdgst": false, 00:26:11.273 "ddgst": false 00:26:11.273 }, 00:26:11.273 "method": "bdev_nvme_attach_controller" 00:26:11.273 },{ 00:26:11.273 "params": { 00:26:11.273 "name": "Nvme5", 00:26:11.273 "trtype": "rdma", 00:26:11.273 "traddr": "192.168.100.8", 00:26:11.273 "adrfam": "ipv4", 00:26:11.273 "trsvcid": "4420", 00:26:11.273 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:11.273 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:11.273 "hdgst": false, 00:26:11.273 "ddgst": false 00:26:11.273 }, 00:26:11.273 "method": "bdev_nvme_attach_controller" 00:26:11.273 },{ 00:26:11.273 "params": { 00:26:11.273 "name": "Nvme6", 00:26:11.273 "trtype": "rdma", 00:26:11.273 "traddr": "192.168.100.8", 00:26:11.273 "adrfam": "ipv4", 00:26:11.273 "trsvcid": "4420", 00:26:11.273 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:11.273 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:11.273 "hdgst": false, 00:26:11.273 "ddgst": false 00:26:11.273 }, 00:26:11.273 "method": "bdev_nvme_attach_controller" 00:26:11.273 },{ 00:26:11.273 "params": { 00:26:11.273 "name": "Nvme7", 00:26:11.273 "trtype": "rdma", 00:26:11.273 "traddr": "192.168.100.8", 00:26:11.273 "adrfam": "ipv4", 00:26:11.273 "trsvcid": "4420", 00:26:11.273 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:11.273 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:11.273 "hdgst": false, 00:26:11.273 "ddgst": false 00:26:11.273 }, 00:26:11.273 "method": "bdev_nvme_attach_controller" 00:26:11.273 },{ 00:26:11.273 "params": { 00:26:11.273 "name": "Nvme8", 00:26:11.273 "trtype": "rdma", 00:26:11.273 "traddr": "192.168.100.8", 00:26:11.273 "adrfam": "ipv4", 00:26:11.273 "trsvcid": "4420", 00:26:11.273 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:26:11.273 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:11.273 "hdgst": false, 00:26:11.273 "ddgst": false 00:26:11.273 }, 00:26:11.273 "method": "bdev_nvme_attach_controller" 00:26:11.273 },{ 00:26:11.273 "params": { 00:26:11.273 "name": "Nvme9", 00:26:11.273 "trtype": "rdma", 00:26:11.273 "traddr": "192.168.100.8", 00:26:11.273 "adrfam": "ipv4", 00:26:11.273 "trsvcid": "4420", 00:26:11.273 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:11.273 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:11.273 "hdgst": false, 00:26:11.273 "ddgst": false 00:26:11.273 }, 00:26:11.273 "method": "bdev_nvme_attach_controller" 00:26:11.273 },{ 00:26:11.273 "params": { 00:26:11.273 "name": "Nvme10", 00:26:11.273 "trtype": "rdma", 00:26:11.273 "traddr": "192.168.100.8", 00:26:11.273 "adrfam": "ipv4", 00:26:11.273 "trsvcid": "4420", 00:26:11.273 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:11.273 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:11.273 "hdgst": false, 00:26:11.273 "ddgst": false 00:26:11.273 }, 00:26:11.273 "method": "bdev_nvme_attach_controller" 00:26:11.273 }' 00:26:11.273 [2024-10-09 02:07:30.905099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.530 [2024-10-09 02:07:31.118230] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.462 Running I/O for 1 seconds... 00:26:13.834 3139.00 IOPS, 196.19 MiB/s 00:26:13.834 Latency(us) 00:26:13.834 [2024-10-09T00:07:33.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.834 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:13.834 Verification LBA range: start 0x0 length 0x400 00:26:13.834 Nvme1n1 : 1.20 319.44 19.97 0.00 0.00 196970.11 72944.42 181449.24 00:26:13.834 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:13.834 Verification LBA range: start 0x0 length 0x400 00:26:13.834 Nvme2n1 : 1.20 318.86 19.93 0.00 0.00 194590.87 62002.75 166860.35 00:26:13.834 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:13.834 Verification LBA range: start 0x0 length 0x400 00:26:13.834 Nvme3n1 : 1.22 367.92 22.99 0.00 0.00 166237.91 7066.49 151359.67 00:26:13.834 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:13.834 Verification LBA range: start 0x0 length 0x400 00:26:13.834 Nvme4n1 : 1.22 367.27 22.95 0.00 0.00 164271.99 12423.35 133123.56 00:26:13.834 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:13.834 Verification LBA range: start 0x0 length 0x400 00:26:13.834 Nvme5n1 : 1.22 343.88 21.49 0.00 0.00 172099.84 13050.21 155918.69 00:26:13.834 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:13.834 Verification LBA range: start 0x0 length 0x400 00:26:13.834 Nvme6n1 : 1.22 316.66 19.79 0.00 0.00 183171.76 12651.30 150447.86 00:26:13.834 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:13.834 Verification LBA range: start 0x0 length 0x400 00:26:13.834 Nvme7n1 : 1.22 350.54 21.91 0.00 0.00 164096.58 12879.25 141329.81 00:26:13.834 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:13.834 Verification LBA range: start 0x0 length 0x400 00:26:13.834 Nvme8n1 : 1.23 341.96 21.37 0.00 0.00 165402.13 13278.16 130388.15 00:26:13.834 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:13.834 Verification LBA range: start 0x0 length 0x400 00:26:13.834 Nvme9n1 : 1.21 316.52 19.78 0.00 0.00 176581.08 14360.93 124005.51 
00:26:13.834 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:13.834 Verification LBA range: start 0x0 length 0x400 00:26:13.834 Nvme10n1 : 1.22 263.30 16.46 0.00 0.00 208731.98 14303.94 275365.18 00:26:13.834 [2024-10-09T00:07:33.654Z] =================================================================================================================== 00:26:13.834 [2024-10-09T00:07:33.654Z] Total : 3306.35 206.65 0.00 0.00 177951.67 7066.49 275365.18 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:15.209 rmmod nvme_rdma 00:26:15.209 rmmod nvme_fabrics 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 3326350 ']' 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 3326350 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 3326350 ']' 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 3326350 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3326350 00:26:15.209 02:07:34 
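[Annotation] The bdevperf summary above is internally consistent: with -o 65536 every I/O moves 64 KiB, so MiB/s is just IOPS x 64 KiB. For the opening sample, 3139.00 IOPS x 65536 B = 205,717,504 B/s = 196.19 MiB/s, exactly the figure printed; the final Total row checks out the same way (3306.35 IOPS x 65536 B / 1048576 = 206.65 MiB/s).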
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3326350' 00:26:15.209 killing process with pid 3326350 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 3326350 00:26:15.209 02:07:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 3326350 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:26:18.487 00:26:18.487 real 0m18.619s 00:26:18.487 user 0m51.251s 00:26:18.487 sys 0m7.036s 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:18.487 ************************************ 00:26:18.487 END TEST nvmf_shutdown_tc1 00:26:18.487 ************************************ 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:18.487 ************************************ 00:26:18.487 START TEST nvmf_shutdown_tc2 00:26:18.487 ************************************ 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
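[Annotation] tc1's teardown above follows a deliberate kill pattern: kill -0 first probes that the pid is still alive and signalable (it delivers no signal), ps --no-headers -o comm= confirms the pid still names the expected process (reactor_1 here, guarding against pid reuse and against signalling a sudo wrapper), and only then is the process killed and reaped with wait. A sketch of that shape, not SPDK's exact helper; the sudo-child handling is an assumption about how a wrapped target would be reached:

killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" 2>/dev/null || return 0           # nothing left to do
    process_name=$(ps --no-headers -o comm= "$pid")  # e.g. "reactor_1"
    if [[ $process_name == sudo ]]; then
        # Assumption: the pid is a sudo wrapper with a single child;
        # signal the child instead of the wrapper.
        pid=$(ps --ppid "$pid" -o pid= | tr -d ' ')
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true  # wait only reaps our own children
}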
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:18.487 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.488 02:07:38 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:26:18.488 Found 0000:18:00.0 (0x8086 - 0x159b) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:26:18.488 Found 0000:18:00.1 (0x8086 - 0x159b) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:18.488 02:07:38 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@403 -- # modinfo irdma 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:26:18.488 Found net devices under 0000:18:00.0: cvl_0_0 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:26:18.488 Found net devices under 0000:18:00.1: cvl_0_1 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 
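[Annotation] The discovery walk above matches PCI vendor:device IDs against the known NVMf-capable NIC tables (0x8086:0x159b is the Intel E810 found at 0000:18:00.0 and .1), loads irdma with roce_ena=1 so the E810 ports speak RoCE, then maps each PCI function to its kernel net device through sysfs. The sysfs step in isolation, using this host's address as the example:

pci=0000:18:00.0
# Every netdev bound to a PCI function appears as a directory here.
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")  # strip the path, keep names like cvl_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"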
00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # rdma_device_init 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@528 -- # allocate_nic_ips 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:18.488 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:26:18.488 
02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo cvl_0_0 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo cvl_0_1 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:26:18.489 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:26:18.489 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:26:18.489 altname enp24s0f0np0 00:26:18.489 altname ens785f0np0 00:26:18.489 inet 192.168.100.8/24 scope global cvl_0_0 00:26:18.489 valid_lft forever preferred_lft forever 00:26:18.489 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:26:18.489 valid_lft forever preferred_lft forever 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:26:18.489 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:26:18.489 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:26:18.489 altname enp24s0f1np1 00:26:18.489 altname ens785f1np1 00:26:18.489 inet 192.168.100.9/24 scope global cvl_0_1 00:26:18.489 valid_lft forever preferred_lft forever 00:26:18.489 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:26:18.489 valid_lft forever preferred_lft forever 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo cvl_0_0 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo cvl_0_1 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:26:18.489 192.168.100.9' 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:26:18.489 192.168.100.9' 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # head -n 1 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:26:18.489 192.168.100.9' 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # tail -n +2 00:26:18.489 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # head -n 1 00:26:18.747 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:18.747 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:26:18.747 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:18.747 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' 
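[Annotation] allocate_nic_ips above pulls the first IPv4 address off each RDMA-capable interface with an ip/awk/cut pipeline, then head/tail split the newline-joined list into the first and second target IPs. The same two steps as standalone shell, with this testbed's interface names:

get_ip_address() {
    # First IPv4 address on an interface, without the prefix length.
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(get_ip_address cvl_0_0; get_ip_address cvl_0_1)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9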
rdma == tcp ']' 00:26:18.747 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:26:18.747 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:26:18.747 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:18.747 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:18.747 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:18.747 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:18.747 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3328110 00:26:18.747 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3328110 00:26:18.747 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:18.747 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3328110 ']' 00:26:18.747 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.747 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:18.747 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.747 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:18.747 02:07:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:18.747 [2024-10-09 02:07:38.421085] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:26:18.747 [2024-10-09 02:07:38.421191] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.747 [2024-10-09 02:07:38.553941] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:19.005 [2024-10-09 02:07:38.750400] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.005 [2024-10-09 02:07:38.750458] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:19.005 [2024-10-09 02:07:38.750471] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.005 [2024-10-09 02:07:38.750485] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.005 [2024-10-09 02:07:38.750495] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
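The trace above (nvmf/common.sh@116-117 and @482-484) resolves the two RDMA target addresses by scraping `ip -o -4 addr show` per interface. A minimal sketch of that logic, with the interface names hard-coded for illustration (the harness derives them from get_rdma_if_list):

```bash
# Sketch of the address-discovery step traced above; cvl_0_0/cvl_0_1 are
# hard-coded here, whereas the real harness enumerates them dynamically.
get_ip_address() {
    local interface=$1
    # First IPv4 address on the interface, with the /prefix stripped
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for nic in cvl_0_0 cvl_0_1; do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"  # -> 192.168.100.8 192.168.100.9
```

`head -n 1` picks the first address as NVMF_FIRST_TARGET_IP; `tail -n +2 | head -n 1` picks the second, matching the 192.168.100.8/192.168.100.9 pair seen in the trace.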
00:26:19.005 [2024-10-09 02:07:38.752877] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:19.005 [2024-10-09 02:07:38.752940] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:19.005 [2024-10-09 02:07:38.753019] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.005 [2024-10-09 02:07:38.753042] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:19.571 [2024-10-09 02:07:39.299844] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x612000029440/0x617000007c40) succeed. 00:26:19.571 [2024-10-09 02:07:39.309634] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x6120000295c0/0x617000007fc0) succeed. 00:26:19.571 [2024-10-09 02:07:39.309669] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:19.571 02:07:39 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.571 02:07:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:19.828 Malloc1 00:26:19.828 [2024-10-09 02:07:39.469725] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:19.828 Malloc2 00:26:19.828 Malloc3 00:26:20.086 Malloc4 00:26:20.086 Malloc5 00:26:20.343 Malloc6 00:26:20.343 Malloc7 00:26:20.343 Malloc8 00:26:20.601 Malloc9 00:26:20.601 Malloc10 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3328490 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3328490 /var/tmp/bdevperf.sock 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3328490 ']' 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:20.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
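Between the transport creation and the bdevperf launch, the cat loop at target/shutdown.sh@28-29 stages one RPC batch per subsystem into rpcs.txt, and the bare rpc_cmd at @36 replays the whole file in one client invocation, yielding the Malloc1..Malloc10 bdevs and the RDMA listener logged above. A hedged sketch of what that loop plausibly emits — only the bdev names and the listener notice are confirmed by the trace, so the exact RPC arguments below are assumptions:

```bash
# Hypothetical per-subsystem RPC batch (i = 1..10); method names are standard
# SPDK RPCs, but sizes, serial numbers, and flags are illustrative only.
for i in {1..10}; do
    cat <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
EOF
done > rpcs.txt
rpc_cmd < rpcs.txt   # replay all ten subsystem setups in one round-trip
```

Replaying the file through a single rpc_cmd call keeps the ten-subsystem setup to one client invocation rather than forty separate ones.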
00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:20.601 { 00:26:20.601 "params": { 00:26:20.601 "name": "Nvme$subsystem", 00:26:20.601 "trtype": "$TEST_TRANSPORT", 00:26:20.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.601 "adrfam": "ipv4", 00:26:20.601 "trsvcid": "$NVMF_PORT", 00:26:20.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.601 "hdgst": ${hdgst:-false}, 00:26:20.601 "ddgst": ${ddgst:-false} 00:26:20.601 }, 00:26:20.601 "method": "bdev_nvme_attach_controller" 00:26:20.601 } 00:26:20.601 EOF 00:26:20.601 )") 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:20.601 { 00:26:20.601 "params": { 00:26:20.601 "name": "Nvme$subsystem", 00:26:20.601 "trtype": "$TEST_TRANSPORT", 00:26:20.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.601 "adrfam": "ipv4", 00:26:20.601 "trsvcid": "$NVMF_PORT", 00:26:20.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.601 "hdgst": ${hdgst:-false}, 00:26:20.601 "ddgst": ${ddgst:-false} 00:26:20.601 }, 00:26:20.601 "method": "bdev_nvme_attach_controller" 00:26:20.601 } 00:26:20.601 EOF 00:26:20.601 )") 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:20.601 { 00:26:20.601 "params": { 00:26:20.601 "name": "Nvme$subsystem", 00:26:20.601 "trtype": "$TEST_TRANSPORT", 00:26:20.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.601 "adrfam": "ipv4", 00:26:20.601 "trsvcid": "$NVMF_PORT", 00:26:20.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.601 "hdgst": ${hdgst:-false}, 00:26:20.601 "ddgst": ${ddgst:-false} 00:26:20.601 }, 00:26:20.601 "method": "bdev_nvme_attach_controller" 00:26:20.601 } 00:26:20.601 EOF 00:26:20.601 )") 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:20.601 { 00:26:20.601 "params": { 00:26:20.601 "name": "Nvme$subsystem", 00:26:20.601 "trtype": "$TEST_TRANSPORT", 00:26:20.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.601 "adrfam": "ipv4", 00:26:20.601 "trsvcid": "$NVMF_PORT", 00:26:20.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.601 "hdgst": ${hdgst:-false}, 00:26:20.601 "ddgst": ${ddgst:-false} 00:26:20.601 }, 00:26:20.601 "method": "bdev_nvme_attach_controller" 00:26:20.601 } 00:26:20.601 EOF 00:26:20.601 )") 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:20.601 { 00:26:20.601 "params": { 00:26:20.601 "name": "Nvme$subsystem", 00:26:20.601 "trtype": "$TEST_TRANSPORT", 00:26:20.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.601 "adrfam": "ipv4", 00:26:20.601 "trsvcid": "$NVMF_PORT", 00:26:20.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.601 "hdgst": ${hdgst:-false}, 00:26:20.601 "ddgst": ${ddgst:-false} 00:26:20.601 }, 00:26:20.601 "method": "bdev_nvme_attach_controller" 00:26:20.601 } 00:26:20.601 EOF 00:26:20.601 )") 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:20.601 { 00:26:20.601 "params": { 00:26:20.601 "name": "Nvme$subsystem", 00:26:20.601 "trtype": "$TEST_TRANSPORT", 00:26:20.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.601 "adrfam": "ipv4", 00:26:20.601 "trsvcid": "$NVMF_PORT", 00:26:20.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.601 "hdgst": ${hdgst:-false}, 00:26:20.601 "ddgst": ${ddgst:-false} 00:26:20.601 }, 00:26:20.601 "method": "bdev_nvme_attach_controller" 00:26:20.601 } 00:26:20.601 EOF 00:26:20.601 )") 00:26:20.601 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:20.859 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:20.859 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:20.859 { 00:26:20.859 "params": { 00:26:20.859 "name": "Nvme$subsystem", 00:26:20.859 "trtype": "$TEST_TRANSPORT", 00:26:20.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.860 "adrfam": "ipv4", 00:26:20.860 "trsvcid": "$NVMF_PORT", 00:26:20.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.860 "hdgst": ${hdgst:-false}, 00:26:20.860 "ddgst": ${ddgst:-false} 00:26:20.860 }, 00:26:20.860 "method": "bdev_nvme_attach_controller" 00:26:20.860 } 00:26:20.860 EOF 00:26:20.860 )") 00:26:20.860 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:20.860 02:07:40 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:20.860 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:20.860 { 00:26:20.860 "params": { 00:26:20.860 "name": "Nvme$subsystem", 00:26:20.860 "trtype": "$TEST_TRANSPORT", 00:26:20.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.860 "adrfam": "ipv4", 00:26:20.860 "trsvcid": "$NVMF_PORT", 00:26:20.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.860 "hdgst": ${hdgst:-false}, 00:26:20.860 "ddgst": ${ddgst:-false} 00:26:20.860 }, 00:26:20.860 "method": "bdev_nvme_attach_controller" 00:26:20.860 } 00:26:20.860 EOF 00:26:20.860 )") 00:26:20.860 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:20.860 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:20.860 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:20.860 { 00:26:20.860 "params": { 00:26:20.860 "name": "Nvme$subsystem", 00:26:20.860 "trtype": "$TEST_TRANSPORT", 00:26:20.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.860 "adrfam": "ipv4", 00:26:20.860 "trsvcid": "$NVMF_PORT", 00:26:20.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.860 "hdgst": ${hdgst:-false}, 00:26:20.860 "ddgst": ${ddgst:-false} 00:26:20.860 }, 00:26:20.860 "method": "bdev_nvme_attach_controller" 00:26:20.860 } 00:26:20.860 EOF 00:26:20.860 )") 00:26:20.860 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:20.860 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:20.860 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:20.860 { 00:26:20.860 "params": { 00:26:20.860 "name": "Nvme$subsystem", 00:26:20.860 "trtype": "$TEST_TRANSPORT", 00:26:20.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.860 "adrfam": "ipv4", 00:26:20.860 "trsvcid": "$NVMF_PORT", 00:26:20.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.860 "hdgst": ${hdgst:-false}, 00:26:20.860 "ddgst": ${ddgst:-false} 00:26:20.860 }, 00:26:20.860 "method": "bdev_nvme_attach_controller" 00:26:20.860 } 00:26:20.860 EOF 00:26:20.860 )") 00:26:20.860 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:20.860 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:26:20.860 [2024-10-09 02:07:40.455413] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
00:26:20.860 [2024-10-09 02:07:40.455510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3328490 ] 00:26:20.860 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:26:20.860 02:07:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:26:20.860 "params": { 00:26:20.860 "name": "Nvme1", 00:26:20.860 "trtype": "rdma", 00:26:20.860 "traddr": "192.168.100.8", 00:26:20.860 "adrfam": "ipv4", 00:26:20.860 "trsvcid": "4420", 00:26:20.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:20.860 "hdgst": false, 00:26:20.860 "ddgst": false 00:26:20.860 }, 00:26:20.860 "method": "bdev_nvme_attach_controller" 00:26:20.860 },{ 00:26:20.860 "params": { 00:26:20.860 "name": "Nvme2", 00:26:20.860 "trtype": "rdma", 00:26:20.860 "traddr": "192.168.100.8", 00:26:20.860 "adrfam": "ipv4", 00:26:20.860 "trsvcid": "4420", 00:26:20.860 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:20.860 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:20.860 "hdgst": false, 00:26:20.860 "ddgst": false 00:26:20.860 }, 00:26:20.860 "method": "bdev_nvme_attach_controller" 00:26:20.860 },{ 00:26:20.860 "params": { 00:26:20.860 "name": "Nvme3", 00:26:20.860 "trtype": "rdma", 00:26:20.860 "traddr": "192.168.100.8", 00:26:20.860 "adrfam": "ipv4", 00:26:20.860 "trsvcid": "4420", 00:26:20.860 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:20.860 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:20.860 "hdgst": false, 00:26:20.860 "ddgst": false 00:26:20.860 }, 00:26:20.860 "method": "bdev_nvme_attach_controller" 00:26:20.860 },{ 00:26:20.860 "params": { 00:26:20.860 "name": "Nvme4", 00:26:20.860 "trtype": "rdma", 00:26:20.860 "traddr": "192.168.100.8", 00:26:20.860 "adrfam": "ipv4", 00:26:20.860 "trsvcid": "4420", 00:26:20.860 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:20.860 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:20.860 "hdgst": false, 00:26:20.860 "ddgst": false 00:26:20.860 }, 00:26:20.860 "method": "bdev_nvme_attach_controller" 00:26:20.860 },{ 00:26:20.860 "params": { 00:26:20.860 "name": "Nvme5", 00:26:20.860 "trtype": "rdma", 00:26:20.860 "traddr": "192.168.100.8", 00:26:20.860 "adrfam": "ipv4", 00:26:20.860 "trsvcid": "4420", 00:26:20.860 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:20.860 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:20.860 "hdgst": false, 00:26:20.860 "ddgst": false 00:26:20.860 }, 00:26:20.860 "method": "bdev_nvme_attach_controller" 00:26:20.860 },{ 00:26:20.860 "params": { 00:26:20.860 "name": "Nvme6", 00:26:20.860 "trtype": "rdma", 00:26:20.860 "traddr": "192.168.100.8", 00:26:20.860 "adrfam": "ipv4", 00:26:20.860 "trsvcid": "4420", 00:26:20.860 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:20.860 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:20.860 "hdgst": false, 00:26:20.860 "ddgst": false 00:26:20.860 }, 00:26:20.860 "method": "bdev_nvme_attach_controller" 00:26:20.860 },{ 00:26:20.860 "params": { 00:26:20.860 "name": "Nvme7", 00:26:20.860 "trtype": "rdma", 00:26:20.860 "traddr": "192.168.100.8", 00:26:20.860 "adrfam": "ipv4", 00:26:20.860 "trsvcid": "4420", 00:26:20.860 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:20.860 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:20.860 "hdgst": false, 00:26:20.860 "ddgst": false 00:26:20.860 }, 00:26:20.860 
"method": "bdev_nvme_attach_controller" 00:26:20.860 },{ 00:26:20.860 "params": { 00:26:20.860 "name": "Nvme8", 00:26:20.860 "trtype": "rdma", 00:26:20.860 "traddr": "192.168.100.8", 00:26:20.860 "adrfam": "ipv4", 00:26:20.860 "trsvcid": "4420", 00:26:20.860 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:20.860 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:20.860 "hdgst": false, 00:26:20.860 "ddgst": false 00:26:20.860 }, 00:26:20.860 "method": "bdev_nvme_attach_controller" 00:26:20.860 },{ 00:26:20.860 "params": { 00:26:20.860 "name": "Nvme9", 00:26:20.860 "trtype": "rdma", 00:26:20.860 "traddr": "192.168.100.8", 00:26:20.860 "adrfam": "ipv4", 00:26:20.860 "trsvcid": "4420", 00:26:20.860 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:20.860 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:20.860 "hdgst": false, 00:26:20.860 "ddgst": false 00:26:20.860 }, 00:26:20.860 "method": "bdev_nvme_attach_controller" 00:26:20.860 },{ 00:26:20.860 "params": { 00:26:20.860 "name": "Nvme10", 00:26:20.860 "trtype": "rdma", 00:26:20.860 "traddr": "192.168.100.8", 00:26:20.860 "adrfam": "ipv4", 00:26:20.860 "trsvcid": "4420", 00:26:20.860 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:20.860 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:20.860 "hdgst": false, 00:26:20.860 "ddgst": false 00:26:20.860 }, 00:26:20.860 "method": "bdev_nvme_attach_controller" 00:26:20.860 }' 00:26:20.860 [2024-10-09 02:07:40.584957] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.118 [2024-10-09 02:07:40.788296] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.489 Running I/O for 10 seconds... 00:26:22.489 02:07:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:22.489 02:07:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:22.489 02:07:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:22.489 02:07:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.489 02:07:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:22.489 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.489 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:22.489 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:22.489 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:22.489 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:26:22.489 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:26:22.489 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:22.489 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:22.489 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 
00:26:22.489 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:22.489 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.489 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:22.489 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.489 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:26:22.489 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:26:22.489 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:22.746 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:22.746 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:22.746 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:22.746 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:22.747 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.747 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.006 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.006 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:26:23.006 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:26:23.006 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:26:23.006 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:26:23.006 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:26:23.006 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3328490 00:26:23.006 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3328490 ']' 00:26:23.006 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3328490 00:26:23.006 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:26:23.006 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:23.006 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3328490 00:26:23.006 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:23.006 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:23.006 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3328490' 00:26:23.006 killing process with pid 3328490 00:26:23.006 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3328490 00:26:23.006 02:07:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3328490 00:26:23.006 Received shutdown signal, test time was about 0.851464 seconds 00:26:23.006 00:26:23.006 Latency(us) 00:26:23.006 [2024-10-09T00:07:42.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.006 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:23.006 Verification LBA range: start 0x0 length 0x400 00:26:23.006 Nvme1n1 : 0.83 309.56 19.35 0.00 0.00 201843.76 68841.29 185096.46 00:26:23.006 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:23.006 Verification LBA range: start 0x0 length 0x400 00:26:23.006 Nvme2n1 : 0.84 380.24 23.77 0.00 0.00 161042.70 9972.87 170507.58 00:26:23.006 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:23.006 Verification LBA range: start 0x0 length 0x400 00:26:23.006 Nvme3n1 : 0.84 379.27 23.70 0.00 0.00 158191.44 13620.09 153183.28 00:26:23.006 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:23.006 Verification LBA range: start 0x0 length 0x400 00:26:23.006 Nvme4n1 : 0.85 378.31 23.64 0.00 0.00 155339.91 14930.81 137682.59 00:26:23.006 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:23.006 Verification LBA range: start 0x0 length 0x400 00:26:23.006 Nvme5n1 : 0.85 331.69 20.73 0.00 0.00 171354.08 15272.74 176890.21 00:26:23.006 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:23.006 Verification LBA range: start 0x0 length 0x400 00:26:23.006 Nvme6n1 : 0.85 339.39 21.21 0.00 0.00 164209.63 15386.71 167772.16 00:26:23.006 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:23.006 Verification LBA range: start 0x0 length 0x400 00:26:23.006 Nvme7n1 : 0.85 322.47 20.15 0.00 0.00 168021.33 14702.86 160477.72 00:26:23.006 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:23.006 Verification LBA range: start 0x0 length 0x400 00:26:23.006 Nvme8n1 : 0.85 318.55 19.91 0.00 0.00 166129.92 11283.59 149536.06 00:26:23.006 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:23.006 Verification LBA range: start 0x0 length 0x400 00:26:23.006 Nvme9n1 : 0.84 305.84 19.12 0.00 0.00 170987.52 11967.44 136770.78 00:26:23.006 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:23.006 Verification LBA range: start 0x0 length 0x400 00:26:23.006 Nvme10n1 : 0.84 228.79 14.30 0.00 0.00 222693.58 13278.16 288130.45 00:26:23.007 [2024-10-09T00:07:42.827Z] =================================================================================================================== 00:26:23.007 [2024-10-09T00:07:42.827Z] Total : 3294.12 205.88 0.00 0.00 171550.61 9972.87 288130.45 00:26:24.377 02:07:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3328110 00:26:25.309 
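The read_io_count checks above come from the waitforio helper; its control flow can be reconstructed directly from the xtrace at target/shutdown.sh@58-70 (names follow the trace; this is a sketch, not the verbatim script):

```bash
# Poll bdevperf's iostat until the bdev has completed >= 100 reads, retrying
# up to 10 times at 0.25 s intervals; returns nonzero if I/O never ramps up.
waitforio() {
    local rpc_addr=$1 bdev_name=$2
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_addr" bdev_get_iostat -b "$bdev_name" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme1n1   # first poll above saw 3 ops, second saw 131
```

Once the threshold is met, the test tears bdevperf down mid-I/O, which is what produces the shutdown-signal latency summary logged above.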
02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:25.309 rmmod nvme_rdma 00:26:25.309 rmmod nvme_fabrics 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 3328110 ']' 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 3328110 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3328110 ']' 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3328110 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:25.309 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3328110 00:26:25.566 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:25.567 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:25.567 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3328110' 00:26:25.567 killing process with pid 3328110 00:26:25.567 02:07:45 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3328110 00:26:25.567 02:07:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3328110 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:26:28.844 00:26:28.844 real 0m10.223s 00:26:28.844 user 0m39.924s 00:26:28.844 sys 0m1.685s 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.844 ************************************ 00:26:28.844 END TEST nvmf_shutdown_tc2 00:26:28.844 ************************************ 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:28.844 ************************************ 00:26:28.844 START TEST nvmf_shutdown_tc3 00:26:28.844 ************************************ 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:28.844 02:07:48 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.844 02:07:48 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:26:28.844 Found 0000:18:00.0 (0x8086 - 0x159b) 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:26:28.844 Found 0000:18:00.1 (0x8086 - 0x159b) 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.844 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:28.845 02:07:48 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@403 -- # modinfo irdma 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:26:28.845 Found net devices under 0000:18:00.0: cvl_0_0 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:26:28.845 Found net devices under 0000:18:00.1: cvl_0_1 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:28.845 02:07:48 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # rdma_device_init 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@528 -- # allocate_nic_ips 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo 
cvl_0_0 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo cvl_0_1 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:26:28.845 28: cvl_0_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:26:28.845 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:26:28.845 altname enp24s0f0np0 00:26:28.845 altname ens785f0np0 00:26:28.845 inet 192.168.100.8/24 scope global cvl_0_0 00:26:28.845 valid_lft forever preferred_lft forever 00:26:28.845 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:26:28.845 valid_lft forever preferred_lft forever 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 --
nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:26:28.845 29: cvl_0_1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:26:28.845 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:26:28.845 altname enp24s0f1np1 00:26:28.845 altname ens785f1np1 00:26:28.845 inet 192.168.100.9/24 scope global cvl_0_1 00:26:28.845 valid_lft forever preferred_lft forever 00:26:28.845 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:26:28.845 valid_lft forever preferred_lft forever 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:26:28.845 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo cvl_0_0 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo cvl_0_1 00:26:28.846 02:07:48
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:26:28.846 192.168.100.9' 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:26:28.846 192.168.100.9' 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # head -n 1 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # tail -n +2 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # head -n 1 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:26:28.846 192.168.100.9' 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:26:28.846 02:07:48 
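The address harvesting that ends above reduces to one pipeline per RDMA interface, after which the first and second target IPs are peeled off the accumulated list with head/tail. A condensed sketch of what nvmf/common.sh@116-117 and @483-484 do, using the cvl_0_0/cvl_0_1 names from this run:
get_ip_address() {
    local interface=$1
    # field 4 of 'ip -o -4 addr show' is ADDR/PREFIX; drop the prefix length
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST="$(get_ip_address cvl_0_0)
$(get_ip_address cvl_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9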
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=3329645 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 3329645 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3329645 ']' 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:28.846 02:07:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:29.104 [2024-10-09 02:07:48.707806] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:26:29.104 [2024-10-09 02:07:48.707906] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.104 [2024-10-09 02:07:48.835564] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:29.361 [2024-10-09 02:07:49.023006] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.361 [2024-10-09 02:07:49.023060] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.361 [2024-10-09 02:07:49.023072] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:29.361 [2024-10-09 02:07:49.023086] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:29.361 [2024-10-09 02:07:49.023096] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:29.361 [2024-10-09 02:07:49.025441] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.361 [2024-10-09 02:07:49.025506] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:29.361 [2024-10-09 02:07:49.025612] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.361 [2024-10-09 02:07:49.025635] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:29.926 [2024-10-09 02:07:49.586249] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x612000029440/0x617000007c40) succeed. 00:26:29.926 [2024-10-09 02:07:49.596013] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x6120000295c0/0x617000007fc0) succeed. 00:26:29.926 [2024-10-09 02:07:49.596049] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:29.926 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:29.927 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:29.927 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:29.927 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:29.927 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:29.927 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:29.927 02:07:49 
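The ten bare '# cat' entries above are shutdown.sh@29 appending one RPC batch per subsystem to rpcs.txt; the heredoc bodies themselves are not echoed by xtrace. A sketch of the shape such a batch plausibly takes (the RPC names are standard SPDK RPCs, but the malloc size/block-size and the -s serial arguments here are illustrative guesses, not values recovered from this run):
num_subsystems=({1..10})
rm -f rpcs.txt
for i in "${num_subsystems[@]}"; do
    cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
EOF
done
rpc_cmd < rpcs.txt   # shutdown.sh@36: replay the whole batch in one rpc.py session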
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:29.927 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.927 02:07:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:29.927 Malloc1 00:26:30.184 [2024-10-09 02:07:49.763213] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:30.184 Malloc2 00:26:30.184 Malloc3 00:26:30.441 Malloc4 00:26:30.441 Malloc5 00:26:30.441 Malloc6 00:26:30.699 Malloc7 00:26:30.699 Malloc8 00:26:30.699 Malloc9 00:26:30.955 Malloc10 00:26:30.955 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.955 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:30.955 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:30.955 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:30.955 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3329905 00:26:30.955 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3329905 /var/tmp/bdevperf.sock 00:26:30.955 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3329905 ']' 00:26:30.955 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:30.955 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:30.955 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:30.955 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:30.955 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:30.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:30.955 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:30.955 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:26:30.955 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:30.955 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:26:30.955 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:30.955 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:30.955 { 00:26:30.955 "params": { 00:26:30.955 "name": "Nvme$subsystem", 00:26:30.955 "trtype": "$TEST_TRANSPORT", 00:26:30.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.955 "adrfam": "ipv4", 00:26:30.955 "trsvcid": "$NVMF_PORT", 00:26:30.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.955 "hdgst": ${hdgst:-false}, 00:26:30.955 "ddgst": ${ddgst:-false} 00:26:30.955 }, 00:26:30.955 "method": "bdev_nvme_attach_controller" 00:26:30.955 } 00:26:30.955 EOF 00:26:30.955 )") 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:30.956 { 00:26:30.956 "params": { 00:26:30.956 "name": "Nvme$subsystem", 00:26:30.956 "trtype": "$TEST_TRANSPORT", 00:26:30.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.956 "adrfam": "ipv4", 00:26:30.956 "trsvcid": "$NVMF_PORT", 00:26:30.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.956 "hdgst": ${hdgst:-false}, 00:26:30.956 "ddgst": ${ddgst:-false} 00:26:30.956 }, 00:26:30.956 "method": "bdev_nvme_attach_controller" 00:26:30.956 } 00:26:30.956 EOF 00:26:30.956 )") 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:30.956 { 00:26:30.956 "params": { 00:26:30.956 "name": "Nvme$subsystem", 00:26:30.956 "trtype": "$TEST_TRANSPORT", 00:26:30.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.956 "adrfam": "ipv4", 00:26:30.956 "trsvcid": "$NVMF_PORT", 00:26:30.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.956 "hdgst": ${hdgst:-false}, 00:26:30.956 "ddgst": ${ddgst:-false} 00:26:30.956 }, 00:26:30.956 "method": "bdev_nvme_attach_controller" 00:26:30.956 } 00:26:30.956 EOF 00:26:30.956 )") 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:30.956 { 00:26:30.956 "params": { 00:26:30.956 "name": "Nvme$subsystem", 00:26:30.956 "trtype": "$TEST_TRANSPORT", 00:26:30.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.956 "adrfam": "ipv4", 00:26:30.956 "trsvcid": "$NVMF_PORT", 00:26:30.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.956 "hdgst": ${hdgst:-false}, 00:26:30.956 "ddgst": ${ddgst:-false} 00:26:30.956 }, 00:26:30.956 "method": "bdev_nvme_attach_controller" 00:26:30.956 } 00:26:30.956 EOF 00:26:30.956 )") 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:30.956 { 00:26:30.956 "params": { 00:26:30.956 "name": "Nvme$subsystem", 00:26:30.956 "trtype": "$TEST_TRANSPORT", 00:26:30.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.956 "adrfam": "ipv4", 00:26:30.956 "trsvcid": "$NVMF_PORT", 00:26:30.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.956 "hdgst": ${hdgst:-false}, 00:26:30.956 "ddgst": ${ddgst:-false} 00:26:30.956 }, 00:26:30.956 "method": "bdev_nvme_attach_controller" 00:26:30.956 } 00:26:30.956 EOF 00:26:30.956 )") 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:30.956 { 00:26:30.956 "params": { 00:26:30.956 "name": "Nvme$subsystem", 00:26:30.956 "trtype": "$TEST_TRANSPORT", 00:26:30.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.956 "adrfam": "ipv4", 00:26:30.956 "trsvcid": "$NVMF_PORT", 00:26:30.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.956 "hdgst": ${hdgst:-false}, 00:26:30.956 "ddgst": ${ddgst:-false} 00:26:30.956 }, 00:26:30.956 "method": "bdev_nvme_attach_controller" 00:26:30.956 } 00:26:30.956 EOF 00:26:30.956 )") 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:30.956 { 00:26:30.956 "params": { 00:26:30.956 "name": "Nvme$subsystem", 00:26:30.956 "trtype": "$TEST_TRANSPORT", 00:26:30.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.956 "adrfam": "ipv4", 00:26:30.956 "trsvcid": "$NVMF_PORT", 00:26:30.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.956 "hdgst": ${hdgst:-false}, 00:26:30.956 "ddgst": ${ddgst:-false} 00:26:30.956 }, 00:26:30.956 "method": "bdev_nvme_attach_controller" 00:26:30.956 } 00:26:30.956 EOF 00:26:30.956 )") 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:30.956 02:07:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:30.956 { 00:26:30.956 "params": { 00:26:30.956 "name": "Nvme$subsystem", 00:26:30.956 "trtype": "$TEST_TRANSPORT", 00:26:30.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.956 "adrfam": "ipv4", 00:26:30.956 "trsvcid": "$NVMF_PORT", 00:26:30.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.956 "hdgst": ${hdgst:-false}, 00:26:30.956 "ddgst": ${ddgst:-false} 00:26:30.956 }, 00:26:30.956 "method": "bdev_nvme_attach_controller" 00:26:30.956 } 00:26:30.956 EOF 00:26:30.956 )") 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:30.956 { 00:26:30.956 "params": { 00:26:30.956 "name": "Nvme$subsystem", 00:26:30.956 "trtype": "$TEST_TRANSPORT", 00:26:30.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.956 "adrfam": "ipv4", 00:26:30.956 "trsvcid": "$NVMF_PORT", 00:26:30.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.956 "hdgst": ${hdgst:-false}, 00:26:30.956 "ddgst": ${ddgst:-false} 00:26:30.956 }, 00:26:30.956 "method": "bdev_nvme_attach_controller" 00:26:30.956 } 00:26:30.956 EOF 00:26:30.956 )") 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:30.956 { 00:26:30.956 "params": { 00:26:30.956 "name": "Nvme$subsystem", 00:26:30.956 "trtype": "$TEST_TRANSPORT", 00:26:30.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.956 "adrfam": "ipv4", 00:26:30.956 "trsvcid": "$NVMF_PORT", 00:26:30.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.956 "hdgst": ${hdgst:-false}, 00:26:30.956 "ddgst": ${ddgst:-false} 00:26:30.956 }, 00:26:30.956 "method": "bdev_nvme_attach_controller" 00:26:30.956 } 00:26:30.956 EOF 00:26:30.956 )") 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 
00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:26:30.956 02:07:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:26:30.956 "params": { 00:26:30.956 "name": "Nvme1", 00:26:30.956 "trtype": "rdma", 00:26:30.956 "traddr": "192.168.100.8", 00:26:30.956 "adrfam": "ipv4", 00:26:30.956 "trsvcid": "4420", 00:26:30.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:30.956 "hdgst": false, 00:26:30.956 "ddgst": false 00:26:30.956 }, 00:26:30.956 "method": "bdev_nvme_attach_controller" 00:26:30.956 },{ 00:26:30.956 "params": { 00:26:30.956 "name": "Nvme2", 00:26:30.956 "trtype": "rdma", 00:26:30.956 "traddr": "192.168.100.8", 00:26:30.956 "adrfam": "ipv4", 00:26:30.956 "trsvcid": "4420", 00:26:30.956 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:30.956 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:30.956 "hdgst": false, 00:26:30.956 "ddgst": false 00:26:30.956 }, 00:26:30.956 "method": "bdev_nvme_attach_controller" 00:26:30.956 },{ 00:26:30.956 "params": { 00:26:30.956 "name": "Nvme3", 00:26:30.956 "trtype": "rdma", 00:26:30.956 "traddr": "192.168.100.8", 00:26:30.956 "adrfam": "ipv4", 00:26:30.956 "trsvcid": "4420", 00:26:30.956 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:30.956 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:30.956 "hdgst": false, 00:26:30.956 "ddgst": false 00:26:30.956 }, 00:26:30.956 "method": "bdev_nvme_attach_controller" 00:26:30.956 },{ 00:26:30.956 "params": { 00:26:30.956 "name": "Nvme4", 00:26:30.956 "trtype": "rdma", 00:26:30.956 "traddr": "192.168.100.8", 00:26:30.956 "adrfam": "ipv4", 00:26:30.956 "trsvcid": "4420", 00:26:30.956 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:30.956 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:30.956 "hdgst": false, 00:26:30.956 "ddgst": false 00:26:30.956 }, 00:26:30.956 "method": "bdev_nvme_attach_controller" 00:26:30.956 },{ 00:26:30.956 "params": { 00:26:30.956 "name": "Nvme5", 00:26:30.956 "trtype": "rdma", 00:26:30.956 "traddr": "192.168.100.8", 00:26:30.956 "adrfam": "ipv4", 00:26:30.956 "trsvcid": "4420", 00:26:30.956 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:30.956 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:30.956 "hdgst": false, 00:26:30.956 "ddgst": false 00:26:30.956 }, 00:26:30.956 "method": "bdev_nvme_attach_controller" 00:26:30.956 },{ 00:26:30.956 "params": { 00:26:30.956 "name": "Nvme6", 00:26:30.956 "trtype": "rdma", 00:26:30.956 "traddr": "192.168.100.8", 00:26:30.956 "adrfam": "ipv4", 00:26:30.956 "trsvcid": "4420", 00:26:30.956 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:30.956 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:30.956 "hdgst": false, 00:26:30.956 "ddgst": false 00:26:30.956 }, 00:26:30.956 "method": "bdev_nvme_attach_controller" 00:26:30.956 },{ 00:26:30.956 "params": { 00:26:30.956 "name": "Nvme7", 00:26:30.956 "trtype": "rdma", 00:26:30.956 "traddr": "192.168.100.8", 00:26:30.956 "adrfam": "ipv4", 00:26:30.956 "trsvcid": "4420", 00:26:30.956 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:30.956 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:30.956 "hdgst": false, 00:26:30.956 "ddgst": false 00:26:30.956 }, 00:26:30.956 "method": "bdev_nvme_attach_controller" 00:26:30.956 },{ 00:26:30.956 "params": { 00:26:30.956 "name": "Nvme8", 00:26:30.956 "trtype": "rdma", 00:26:30.956 "traddr": "192.168.100.8", 00:26:30.956 "adrfam": "ipv4", 00:26:30.956 "trsvcid": "4420", 00:26:30.956 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:26:30.956 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:30.956 "hdgst": false, 00:26:30.956 "ddgst": false 00:26:30.956 }, 00:26:30.956 "method": "bdev_nvme_attach_controller" 00:26:30.956 },{ 00:26:30.956 "params": { 00:26:30.956 "name": "Nvme9", 00:26:30.956 "trtype": "rdma", 00:26:30.956 "traddr": "192.168.100.8", 00:26:30.956 "adrfam": "ipv4", 00:26:30.956 "trsvcid": "4420", 00:26:30.956 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:30.956 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:30.956 "hdgst": false, 00:26:30.956 "ddgst": false 00:26:30.956 }, 00:26:30.956 "method": "bdev_nvme_attach_controller" 00:26:30.956 },{ 00:26:30.956 "params": { 00:26:30.956 "name": "Nvme10", 00:26:30.956 "trtype": "rdma", 00:26:30.956 "traddr": "192.168.100.8", 00:26:30.956 "adrfam": "ipv4", 00:26:30.956 "trsvcid": "4420", 00:26:30.956 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:30.956 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:30.956 "hdgst": false, 00:26:30.956 "ddgst": false 00:26:30.956 }, 00:26:30.956 "method": "bdev_nvme_attach_controller" 00:26:30.956 }' 00:26:30.956 [2024-10-09 02:07:50.756079] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:26:30.956 [2024-10-09 02:07:50.756173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3329905 ] 00:26:31.213 [2024-10-09 02:07:50.888102] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.469 [2024-10-09 02:07:51.087585] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.400 Running I/O for 10 seconds... 00:26:32.658 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:32.658 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:26:32.658 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:32.658 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.658 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:32.658 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.658 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:32.658 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:32.658 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:32.658 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:32.658 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:26:32.658 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:26:32.658 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
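gen_nvmf_target_json, whose merged output bdevperf just consumed on /dev/fd/63, builds one JSON fragment per subsystem in a bash array and then joins the fragments with IFS=','. A condensed sketch of that pattern with two controllers instead of ten (the fragments are wrapped in a bare array here so jq can validate them standalone, whereas common.sh@582-584 splices them into its larger config document):
config=()
for subsystem in 1 2; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "rdma",
    "traddr": "192.168.100.8",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=','
printf '[%s]\n' "${config[*]}" | jq .   # ${config[*]} joins elements with the first char of IFS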
target/shutdown.sh@60 -- # (( i = 10 )) 00:26:32.658 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:32.658 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:32.658 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:32.658 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.658 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:32.916 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.916 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:26:32.916 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:26:32.916 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:33.173 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:33.173 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:33.173 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:33.173 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:33.173 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.173 02:07:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:33.430 02:07:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.430 02:07:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:26:33.430 02:07:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:26:33.430 02:07:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:26:33.430 02:07:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:26:33.430 02:07:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:26:33.430 02:07:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3329645 00:26:33.430 02:07:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3329645 ']' 00:26:33.430 02:07:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3329645 00:26:33.430 02:07:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:26:33.430 02:07:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:33.430 02:07:53 
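The waitforio helper traced above polls bdevperf over its dedicated RPC socket until the Nvme1n1 bdev has served at least 100 reads, which is why the log shows read_io_count=3 on the first pass and 195 (>= 100, so ret=0 and break) on the second. Reconstructed from the shutdown.sh@51-70 lines, with the argument checks shortened; rpc_cmd is the harness wrapper around scripts/rpc.py:
waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
    [ -n "$rpc_sock" ] && [ -n "$bdev" ] || return 1
    for ((i = 10; i != 0; i--)); do
        # bdev_get_iostat reports per-bdev counters; take the read-op count
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}
waitforio /var/tmp/bdevperf.sock Nvme1n1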
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3329645 00:26:33.431 02:07:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:33.431 02:07:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:33.431 02:07:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3329645' 00:26:33.431 killing process with pid 3329645 00:26:33.431 02:07:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 3329645 00:26:33.431 02:07:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 3329645 00:26:33.999 2642.00 IOPS, 165.12 MiB/s [2024-10-09T00:07:53.819Z] [2024-10-09 02:07:53.687662] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:33.999 [2024-10-09 02:07:53.688424] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200013802740 was disconnected and freed. reset controller. 00:26:33.999 [2024-10-09 02:07:53.688467] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:33.999 [2024-10-09 02:07:53.688984] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200013802440 was disconnected and freed. reset controller. 00:26:33.999 [2024-10-09 02:07:53.689012] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:33.999 [2024-10-09 02:07:53.689510] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200013802140 was disconnected and freed. reset controller. 00:26:33.999 [2024-10-09 02:07:53.689536] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:33.999 [2024-10-09 02:07:53.690056] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200013801e40 was disconnected and freed. reset controller. 00:26:33.999 [2024-10-09 02:07:53.690081] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:33.999 [2024-10-09 02:07:53.690602] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200013801b40 was disconnected and freed. reset controller. 
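killprocess, whose trace precedes the disconnect notices above, verifies that the pid still names an SPDK reactor before signalling it, then waits so the test proceeds only once the target is really gone; tearing the target down under live I/O is what produces the qpair-freed and ABORTED - SQ DELETION completions that follow. A simplified reconstruction from the autotest_common.sh@950-974 lines (the real helper also handles sudo-wrapped and non-Linux processes, which this sketch omits):
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid"                                   # still alive?
    process_name=$(ps --no-headers -o comm= "$pid")  # reactor_1 in this run
    [ "$process_name" = sudo ] && return 1           # never signal a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}
killprocess 3329645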
00:26:33.999 [2024-10-09 02:07:53.690628] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:33.999 [2024-10-09 02:07:53.690651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bcdfcc0 len:0x10000 key:0x2e952b29 00:26:33.999 [2024-10-09 02:07:53.690670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.999 [2024-10-09 02:07:53.690703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bccfc00 len:0x10000 key:0x2e952b29 00:26:33.999 [2024-10-09 02:07:53.690718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.999 [2024-10-09 02:07:53.690734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bcbfb40 len:0x10000 key:0x2e952b29 00:26:33.999 [2024-10-09 02:07:53.690747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.999 [2024-10-09 02:07:53.690763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bcafa80 len:0x10000 key:0x2e952b29 00:26:33.999 [2024-10-09 02:07:53.690777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.999 [2024-10-09 02:07:53.690793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bc9f9c0 len:0x10000 key:0x2e952b29 00:26:33.999 [2024-10-09 02:07:53.690810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.999 [2024-10-09 02:07:53.690825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bc8f900 len:0x10000 key:0x2e952b29 00:26:33.999 [2024-10-09 02:07:53.690839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.999 [2024-10-09 02:07:53.690854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bc7f840 len:0x10000 key:0x2e952b29 00:26:33.999 [2024-10-09 02:07:53.690867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.999 [2024-10-09 02:07:53.690881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bc6f780 len:0x10000 key:0x2e952b29 00:26:33.999 [2024-10-09 02:07:53.690903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.999 [2024-10-09 02:07:53.690919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bc5f6c0 len:0x10000 key:0x2e952b29 00:26:33.999 [2024-10-09 
02:07:53.690931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.999 [2024-10-09 02:07:53.690946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bc4f600 len:0x10000 key:0x2e952b29 00:26:33.999 [2024-10-09 02:07:53.690959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.999 [2024-10-09 02:07:53.690974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bc3f540 len:0x10000 key:0x2e952b29 00:26:33.999 [2024-10-09 02:07:53.690986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.999 [2024-10-09 02:07:53.691001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bc2f480 len:0x10000 key:0x2e952b29 00:26:33.999 [2024-10-09 02:07:53.691014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.999 [2024-10-09 02:07:53.691029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bc1f3c0 len:0x10000 key:0x2e952b29 00:26:33.999 [2024-10-09 02:07:53.691041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.999 [2024-10-09 02:07:53.691055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bc0f300 len:0x10000 key:0x2e952b29 00:26:34.000 [2024-10-09 02:07:53.691068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bfeffc0 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bfdff00 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bfcfe40 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bfbfd80 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bfafcc0 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bf9fc00 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bf8fb40 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bf7fa80 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bf6f9c0 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bf5f900 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bf4f840 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bf3f780 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bf2f6c0 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 
[2024-10-09 02:07:53.691432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bf1f600 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bf0f540 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001beff480 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001beef3c0 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bedf300 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001becf240 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bebf180 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001beaf0c0 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001be9f000 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691688] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001be8ef40 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001be7ee80 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001be6edc0 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001be5ed00 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001be4ec40 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001be3eb80 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001be2eac0 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001be1ea00 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001be0e940 len:0x10000 key:0xda594e10 00:26:34.000 [2024-10-09 02:07:53.691923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20001c1effc0 len:0x10000 key:0xbef53b16 00:26:34.000 [2024-10-09 02:07:53.691951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001c1dff00 len:0x10000 key:0xbef53b16 00:26:34.000 [2024-10-09 02:07:53.691979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.691994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001c1cfe40 len:0x10000 key:0xbef53b16 00:26:34.000 [2024-10-09 02:07:53.692007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.692021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001c1bfd80 len:0x10000 key:0xbef53b16 00:26:34.000 [2024-10-09 02:07:53.692034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.692048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001c1afcc0 len:0x10000 key:0xbef53b16 00:26:34.000 [2024-10-09 02:07:53.692062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.692080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001c19fc00 len:0x10000 key:0xbef53b16 00:26:34.000 [2024-10-09 02:07:53.692093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.000 [2024-10-09 02:07:53.692107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001c18fb40 len:0x10000 key:0xbef53b16 00:26:34.000 [2024-10-09 02:07:53.692120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.692134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001c17fa80 len:0x10000 key:0xbef53b16 00:26:34.001 [2024-10-09 02:07:53.692147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.692161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001c16f9c0 len:0x10000 key:0xbef53b16 00:26:34.001 [2024-10-09 02:07:53.692174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.692189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001c15f900 len:0x10000 key:0xbef53b16 00:26:34.001 [2024-10-09 
02:07:53.692201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.692216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001c14f840 len:0x10000 key:0xbef53b16 00:26:34.001 [2024-10-09 02:07:53.692228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.692242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001c13f780 len:0x10000 key:0xbef53b16 00:26:34.001 [2024-10-09 02:07:53.692255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.692269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001c12f6c0 len:0x10000 key:0xbef53b16 00:26:34.001 [2024-10-09 02:07:53.692281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.692294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001c11f600 len:0x10000 key:0xbef53b16 00:26:34.001 [2024-10-09 02:07:53.692307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.692321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001c10f540 len:0x10000 key:0xbef53b16 00:26:34.001 [2024-10-09 02:07:53.692333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.692347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001c0ff480 len:0x10000 key:0xbef53b16 00:26:34.001 [2024-10-09 02:07:53.692359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.692373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001c0ef3c0 len:0x10000 key:0xbef53b16 00:26:34.001 [2024-10-09 02:07:53.692387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.692400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001c0df300 len:0x10000 key:0xbef53b16 00:26:34.001 [2024-10-09 02:07:53.692413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.692427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001bcefd80 len:0x10000 key:0x2e952b29 00:26:34.001 [2024-10-09 02:07:53.692439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.696533] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200013801840 was disconnected and freed. reset controller. 00:26:34.001 [2024-10-09 02:07:53.696625] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:34.001 [2024-10-09 02:07:53.696649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.001 [2024-10-09 02:07:53.696665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.696683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.001 [2024-10-09 02:07:53.696697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.696710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.001 [2024-10-09 02:07:53.696723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.696736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.001 [2024-10-09 02:07:53.696749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.697072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:34.001 [2024-10-09 02:07:53.697093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:26:34.001 [2024-10-09 02:07:53.697107] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:26:34.001 [2024-10-09 02:07:53.697127] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:34.001 [2024-10-09 02:07:53.697143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.001 [2024-10-09 02:07:53.697164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.697179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.001 [2024-10-09 02:07:53.697191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.697204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.001 [2024-10-09 02:07:53.697217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.697233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.001 [2024-10-09 02:07:53.697245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.697491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:34.001 [2024-10-09 02:07:53.697508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:26:34.001 [2024-10-09 02:07:53.697521] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:26:34.001 [2024-10-09 02:07:53.697562] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:34.001 [2024-10-09 02:07:53.697578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.001 [2024-10-09 02:07:53.697592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.697607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.001 [2024-10-09 02:07:53.697620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.697633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.001 [2024-10-09 02:07:53.697645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.697658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.001 [2024-10-09 02:07:53.697670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.697938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:34.001 [2024-10-09 02:07:53.697954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:26:34.001 [2024-10-09 02:07:53.697973] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:34.001 [2024-10-09 02:07:53.697990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.001 [2024-10-09 02:07:53.698003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.698017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.001 [2024-10-09 02:07:53.698029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.698043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.001 [2024-10-09 02:07:53.698055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.698069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.001 [2024-10-09 02:07:53.698081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.698330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:34.001 [2024-10-09 02:07:53.698348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:26:34.001 [2024-10-09 02:07:53.698369] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:34.001 [2024-10-09 02:07:53.698384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.001 [2024-10-09 02:07:53.698398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.698412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.001 [2024-10-09 02:07:53.698424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.698437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.001 [2024-10-09 02:07:53.698449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.698462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.001 [2024-10-09 02:07:53.698475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.001 [2024-10-09 02:07:53.698714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:34.001 [2024-10-09 02:07:53.698730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:34.001 [2024-10-09 02:07:53.698751] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:34.002 [2024-10-09 02:07:53.698767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.002 [2024-10-09 02:07:53.698780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.002 [2024-10-09 02:07:53.698793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.002 [2024-10-09 02:07:53.698805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.002 [2024-10-09 02:07:53.698818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.002 [2024-10-09 02:07:53.698830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.002 [2024-10-09 02:07:53.698843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.002 [2024-10-09 02:07:53.698855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.002 [2024-10-09 02:07:53.699106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:34.002 [2024-10-09 02:07:53.699123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:26:34.002 [2024-10-09 02:07:53.699143] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:34.002 [2024-10-09 02:07:53.699157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.002 [2024-10-09 02:07:53.699173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.002 [2024-10-09 02:07:53.699187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.002 [2024-10-09 02:07:53.699200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.002 [2024-10-09 02:07:53.699213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.002 [2024-10-09 02:07:53.699226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.002 [2024-10-09 02:07:53.699239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.002 [2024-10-09 02:07:53.699252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.002 [2024-10-09 02:07:53.699506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:34.002 [2024-10-09 02:07:53.699522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:26:34.002 [2024-10-09 02:07:53.699534] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:26:34.002 [2024-10-09 02:07:53.699566] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:34.002 [2024-10-09 02:07:53.699582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.002 [2024-10-09 02:07:53.699596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.002 [2024-10-09 02:07:53.699611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.002 [2024-10-09 02:07:53.699624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.002 [2024-10-09 02:07:53.699637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.002 [2024-10-09 02:07:53.699649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.002 [2024-10-09 02:07:53.699662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.002 [2024-10-09 02:07:53.699674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.002 [2024-10-09 02:07:53.699911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:34.002 [2024-10-09 02:07:53.699927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:26:34.002 [2024-10-09 02:07:53.699938] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:26:34.002 [2024-10-09 02:07:53.699956] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:34.002 [2024-10-09 02:07:53.699972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.002 [2024-10-09 02:07:53.699985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.002 [2024-10-09 02:07:53.699998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.002 [2024-10-09 02:07:53.700013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.002 [2024-10-09 02:07:53.700026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.002 [2024-10-09 02:07:53.700042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.002 [2024-10-09 02:07:53.700055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.002 [2024-10-09 02:07:53.700067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.002 [2024-10-09 02:07:53.700305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:34.002 [2024-10-09 02:07:53.700321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:26:34.002 [2024-10-09 02:07:53.700332] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:26:34.002 [2024-10-09 02:07:53.700348] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:34.002 [2024-10-09 02:07:53.700364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.002 [2024-10-09 02:07:53.700377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.002 [2024-10-09 02:07:53.700390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.002 [2024-10-09 02:07:53.700403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.002 [2024-10-09 02:07:53.700417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.002 [2024-10-09 02:07:53.700429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.002 [2024-10-09 02:07:53.700441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.002 [2024-10-09 02:07:53.700453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.002 [2024-10-09 02:07:53.700704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:34.002 [2024-10-09 02:07:53.700721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:26:34.002 [2024-10-09 02:07:53.700733] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:34.002 [2024-10-09 02:07:53.700751] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:34.002 [2024-10-09 02:07:53.701275] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200013803340 was disconnected and freed. reset controller. 00:26:34.002 [2024-10-09 02:07:53.701292] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:34.002 [2024-10-09 02:07:53.701309] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:34.002 [2024-10-09 02:07:53.701819] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200013803040 was disconnected and freed. reset controller. 00:26:34.002 [2024-10-09 02:07:53.701836] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:34.002 [2024-10-09 02:07:53.701856] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:34.002 [2024-10-09 02:07:53.702361] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200013802d40 was disconnected and freed. reset controller. 
00:26:34.002 [2024-10-09 02:07:53.702378] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:34.002 [2024-10-09 02:07:53.702393] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:34.002 [2024-10-09 02:07:53.730505] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200013802a40 was disconnected and freed. reset controller. 00:26:34.002 [2024-10-09 02:07:53.730530] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:34.002 [2024-10-09 02:07:53.731543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:26:34.002 [2024-10-09 02:07:53.731574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:26:34.002 [2024-10-09 02:07:53.731589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:26:34.002 [2024-10-09 02:07:53.731608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:26:34.002 [2024-10-09 02:07:53.731679] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:34.002 [2024-10-09 02:07:53.731698] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:34.002 [2024-10-09 02:07:53.731713] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:34.002 [2024-10-09 02:07:53.731729] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:34.002 [2024-10-09 02:07:53.731749] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:34.002 [2024-10-09 02:07:53.731764] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:26:34.002 [2024-10-09 02:07:53.731832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:26:34.002 [2024-10-09 02:07:53.731849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:26:34.002 [2024-10-09 02:07:53.731864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.002 [2024-10-09 02:07:53.731879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:26:34.002 [2024-10-09 02:07:53.731901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:26:34.002 [2024-10-09 02:07:53.731914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:26:34.002 [2024-10-09 02:07:53.749692] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:34.002 [2024-10-09 02:07:53.749720] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:34.002 [2024-10-09 02:07:53.749733] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001a1bf040
00:26:34.003 [2024-10-09 02:07:53.749757] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:34.003 [2024-10-09 02:07:53.749769] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:34.003 [2024-10-09 02:07:53.749779] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001a1b2340
00:26:34.003 [2024-10-09 02:07:53.749796] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:34.003 [2024-10-09 02:07:53.749812] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:34.003 [2024-10-09 02:07:53.749822] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001a1a55c0
00:26:34.003 [2024-10-09 02:07:53.749839] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:34.003 [2024-10-09 02:07:53.749851] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:34.003 [2024-10-09 02:07:53.749861] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001a194b00
00:26:34.003 [2024-10-09 02:07:53.749914] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:34.003 [2024-10-09 02:07:53.749928] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:34.003 [2024-10-09 02:07:53.749940] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001a1c8040
00:26:34.003 [2024-10-09 02:07:53.749961] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:34.003 [2024-10-09 02:07:53.749974] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:34.003 [2024-10-09 02:07:53.749984] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001a1d4b00
00:26:34.003 [2024-10-09 02:07:53.750005] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:34.003 [2024-10-09 02:07:53.750018] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:34.003 [2024-10-09 02:07:53.750028] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000b1ff100
00:26:34.003 [2024-10-09 02:07:53.750047] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:34.003 [2024-10-09 02:07:53.750059] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:34.003 [2024-10-09 02:07:53.750068] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000137ff7c0
00:26:34.003 [2024-10-09 02:07:53.750090] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:34.003 [2024-10-09 02:07:53.750103] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:34.003 [2024-10-09 02:07:53.750112] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001a17b040
00:26:34.003 [2024-10-09 02:07:53.750131] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:34.003 [2024-10-09 02:07:53.750145] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:34.003 [2024-10-09 02:07:53.750154] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001a18b040
00:26:34.567 task offset: 24576 on job bdev=Nvme10n1 fails
00:26:34.568 inf IOPS, inf MiB/s
00:26:34.568 Latency(us)
00:26:34.568 [2024-10-09T00:07:54.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:34.568 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:34.568 Job: Nvme1n1 ended in about 2.04 seconds with error
00:26:34.568 Verification LBA range: start 0x0 length 0x400
00:26:34.568 Nvme1n1 : 2.04 125.32 7.83 31.33 0.00 398321.40 70664.90 1013927.40
00:26:34.568 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:34.568 Job: Nvme2n1 ended in about 2.04 seconds with error
00:26:34.568 Verification LBA range: start 0x0 length 0x400
00:26:34.568 Nvme2n1 : 2.04 125.28 7.83 31.32 0.00 395006.84 65194.07 1013927.40
00:26:34.568 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:34.568 Job: Nvme3n1 ended in about 2.04 seconds with error
00:26:34.568 Verification LBA range: start 0x0 length 0x400
00:26:34.568 Nvme3n1 : 2.04 156.55 9.78 31.31 0.00 326430.16 6468.12 1013927.40
00:26:34.568 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:34.568 Job: Nvme4n1 ended in about 2.04 seconds with error
00:26:34.568 Verification LBA range: start 0x0 length 0x400
00:26:34.568 Nvme4n1 : 2.04 156.50 9.78 31.30 0.00 323746.36 27582.11 1013927.40
00:26:34.568 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:34.568 Job: Nvme5n1 ended in about 2.05 seconds with error
00:26:34.568 Verification LBA range: start 0x0 length 0x400
00:26:34.568 Nvme5n1 : 2.05 140.80 8.80 31.29 0.00 350210.91 38067.87 1006632.96
00:26:34.568 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:34.568 Job: Nvme6n1 ended in about 2.05 seconds with error
00:26:34.568 Verification LBA range: start 0x0 length 0x400
00:26:34.568 Nvme6n1 : 2.05 137.33 8.58 31.28 0.00 354117.07 46730.02 1006632.96
00:26:34.568 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:34.568 Job: Nvme7n1 ended in about 2.05 seconds with error
00:26:34.568 Verification LBA range: start 0x0 length 0x400
00:26:34.568 Nvme7n1 : 2.05 140.71 8.79 31.27 0.00 344117.10 57215.78 1006632.96
00:26:34.568 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:34.568 Job: Nvme8n1 ended in about 2.05 seconds with error
00:26:34.568 Verification LBA range: start 0x0 length 0x400
00:26:34.568 Nvme8n1 : 2.05 133.33 8.33 31.26 0.00 356346.74 62914.56 1006632.96
00:26:34.568 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:34.568 Job: Nvme9n1 ended in about 2.05 seconds with error
00:26:34.568 Verification LBA range: start 0x0 length 0x400
00:26:34.568 Nvme9n1 : 2.05 124.99 7.81 31.25 0.00 371815.65 63370.46 1006632.96
00:26:34.568 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:34.568 Job: Nvme10n1 ended in about 1.51 seconds with error
00:26:34.568 Verification LBA range: start 0x0 length 0x400
00:26:34.568 Nvme10n1 : 1.51 126.90 7.93 42.30 0.00 335261.16 70664.90 601791.44
00:26:34.568 [2024-10-09T00:07:54.388Z] ===================================================================================================================
00:26:34.568 [2024-10-09T00:07:54.388Z] Total : 1367.71 85.48 323.90 0.00 354600.41 6468.12 1013927.40
00:26:34.568 [2024-10-09 02:07:54.355579] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:35.133 [2024-10-09 02:07:54.752680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:26:35.133 [2024-10-09 02:07:54.752734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:26:35.133 [2024-10-09 02:07:54.752970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:26:35.133 [2024-10-09 02:07:54.752989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:26:35.133 [2024-10-09 02:07:54.753216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:26:35.133 [2024-10-09 02:07:54.753234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:26:35.133 [2024-10-09 02:07:54.753455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:26:35.133 [2024-10-09 02:07:54.753471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:26:35.133 [2024-10-09 02:07:54.753728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:26:35.133 [2024-10-09 02:07:54.753749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:26:35.133 [2024-10-09 02:07:54.753965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:26:35.133 [2024-10-09 02:07:54.753981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:26:35.133 [2024-10-09 02:07:54.754210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:26:35.133 [2024-10-09 02:07:54.754227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.133 [2024-10-09 02:07:54.754427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:26:35.133 [2024-10-09 02:07:54.754444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:26:35.133 [2024-10-09 02:07:54.754691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:26:35.133 [2024-10-09 02:07:54.754707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:26:35.133 [2024-10-09 02:07:54.754918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:26:35.133 [2024-10-09 02:07:54.754935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:26:35.133 [2024-10-09 02:07:54.754946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:26:35.133 [2024-10-09 02:07:54.754960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:26:35.133 [2024-10-09 02:07:54.754973] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] already in failed state
00:26:35.133 [2024-10-09 02:07:54.755001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:26:35.133 [2024-10-09 02:07:54.755013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:26:35.133 [2024-10-09 02:07:54.755024] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] already in failed state
00:26:35.133 [2024-10-09 02:07:54.755039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:26:35.133 [2024-10-09 02:07:54.755050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:26:35.133 [2024-10-09 02:07:54.755061] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] already in failed state
00:26:35.133 [2024-10-09 02:07:54.755077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:26:35.133 [2024-10-09 02:07:54.755089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:26:35.133 [2024-10-09 02:07:54.755100] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] already in failed state
00:26:35.133 [2024-10-09 02:07:54.755131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.133 [2024-10-09 02:07:54.755155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.133 [2024-10-09 02:07:54.755169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.133 [2024-10-09 02:07:54.755182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.133 [2024-10-09 02:07:54.755199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:26:35.133 [2024-10-09 02:07:54.755210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:26:35.133 [2024-10-09 02:07:54.755221] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] already in failed state
00:26:35.133 [2024-10-09 02:07:54.755236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:26:35.133 [2024-10-09 02:07:54.755250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:26:35.133 [2024-10-09 02:07:54.755262] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] already in failed state
00:26:35.133 [2024-10-09 02:07:54.755277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.133 [2024-10-09 02:07:54.755288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.133 [2024-10-09 02:07:54.755299] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:26:35.133 [2024-10-09 02:07:54.755315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:26:35.133 [2024-10-09 02:07:54.755327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:26:35.133 [2024-10-09 02:07:54.755338] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] already in failed state
00:26:35.133 [2024-10-09 02:07:54.755353] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:26:35.133 [2024-10-09 02:07:54.755364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:26:35.133 [2024-10-09 02:07:54.755375] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] already in failed state
00:26:35.133 [2024-10-09 02:07:54.755391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:26:35.133 [2024-10-09 02:07:54.755402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:26:35.133 [2024-10-09 02:07:54.755413] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] already in failed state
00:26:35.134 [2024-10-09 02:07:54.755501] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.134 [2024-10-09 02:07:54.755517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.134 [2024-10-09 02:07:54.755529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.134 [2024-10-09 02:07:54.755547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.134 [2024-10-09 02:07:54.755574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.134 [2024-10-09 02:07:54.755587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.507 02:07:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:26:37.884 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3329905 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3329905 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3329905 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 
00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:37.885 rmmod nvme_rdma 00:26:37.885 rmmod nvme_fabrics 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 3329645 ']' 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 3329645 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3329645 ']' 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3329645 00:26:37.885 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3329645) - No such process 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3329645 is not found' 00:26:37.885 Process with pid 3329645 is not found 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:26:37.885 00:26:37.885 real 0m9.009s 00:26:37.885 user 0m31.916s 00:26:37.885 sys 0m1.917s 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:37.885 ************************************ 00:26:37.885 END TEST nvmf_shutdown_tc3 00:26:37.885 ************************************ 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ rdma == \r\d\m\a ]] 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:26:37.885 00:26:37.885 real 0m38.299s 00:26:37.885 user 2m3.284s 00:26:37.885 sys 0m10.928s 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:37.885 ************************************ 00:26:37.885 END TEST nvmf_shutdown 00:26:37.885 ************************************ 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:26:37.885 00:26:37.885 real 14m55.281s 
00:26:37.885 user 44m44.980s 00:26:37.885 sys 3m14.786s 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:37.885 02:07:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:37.885 ************************************ 00:26:37.885 END TEST nvmf_target_extra 00:26:37.885 ************************************ 00:26:37.885 02:07:57 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:26:37.885 02:07:57 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:37.885 02:07:57 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:37.885 02:07:57 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:37.885 ************************************ 00:26:37.885 START TEST nvmf_host 00:26:37.885 ************************************ 00:26:37.885 02:07:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:26:37.885 * Looking for test storage... 00:26:37.885 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf 00:26:37.885 02:07:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:37.885 02:07:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:26:37.885 02:07:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:38.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.144 --rc genhtml_branch_coverage=1 00:26:38.144 --rc genhtml_function_coverage=1 00:26:38.144 --rc genhtml_legend=1 00:26:38.144 --rc geninfo_all_blocks=1 00:26:38.144 --rc geninfo_unexecuted_blocks=1 00:26:38.144 00:26:38.144 ' 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:38.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.144 --rc genhtml_branch_coverage=1 00:26:38.144 --rc genhtml_function_coverage=1 00:26:38.144 --rc genhtml_legend=1 00:26:38.144 --rc geninfo_all_blocks=1 00:26:38.144 --rc geninfo_unexecuted_blocks=1 00:26:38.144 00:26:38.144 ' 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:38.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.144 --rc genhtml_branch_coverage=1 00:26:38.144 --rc genhtml_function_coverage=1 00:26:38.144 --rc genhtml_legend=1 00:26:38.144 --rc geninfo_all_blocks=1 00:26:38.144 --rc geninfo_unexecuted_blocks=1 00:26:38.144 00:26:38.144 ' 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:38.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.144 --rc genhtml_branch_coverage=1 00:26:38.144 --rc genhtml_function_coverage=1 00:26:38.144 --rc genhtml_legend=1 00:26:38.144 --rc geninfo_all_blocks=1 00:26:38.144 --rc geninfo_unexecuted_blocks=1 00:26:38.144 00:26:38.144 ' 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:38.144 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.144 ************************************ 00:26:38.144 START TEST nvmf_multicontroller 00:26:38.144 ************************************ 00:26:38.144 02:07:57 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:26:38.144 * Looking for test storage... 
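The '[: : integer expression expected' complaint that recurs in each of these preambles is nvmf/common.sh line 33 evaluating "'[' '' -eq 1 ']'": an unset flag expands to the empty string, which test cannot parse as an integer. A hedged one-line hardening of that pattern (the flag name below is hypothetical, not the variable the script actually tests):

    # Default unset/empty numeric flags before an arithmetic test so
    # "[ '' -eq 1 ]" can never be produced.
    some_flag="${SOME_TEST_FLAG:-0}"   # SOME_TEST_FLAG is illustrative
    if [ "$some_flag" -eq 1 ]; then
        echo "flag enabled"
    fi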
00:26:38.144 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:26:38.145 02:07:57 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:38.145 02:07:57 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:26:38.145 02:07:57 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:26:38.403 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:38.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.404 --rc genhtml_branch_coverage=1 00:26:38.404 --rc genhtml_function_coverage=1 00:26:38.404 --rc genhtml_legend=1 00:26:38.404 --rc geninfo_all_blocks=1 00:26:38.404 --rc geninfo_unexecuted_blocks=1 00:26:38.404 00:26:38.404 ' 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:38.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.404 --rc genhtml_branch_coverage=1 00:26:38.404 --rc genhtml_function_coverage=1 00:26:38.404 --rc genhtml_legend=1 00:26:38.404 --rc geninfo_all_blocks=1 00:26:38.404 --rc geninfo_unexecuted_blocks=1 00:26:38.404 00:26:38.404 ' 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:38.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.404 --rc genhtml_branch_coverage=1 00:26:38.404 --rc genhtml_function_coverage=1 00:26:38.404 --rc genhtml_legend=1 00:26:38.404 --rc geninfo_all_blocks=1 00:26:38.404 --rc geninfo_unexecuted_blocks=1 00:26:38.404 00:26:38.404 ' 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:38.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.404 --rc genhtml_branch_coverage=1 00:26:38.404 --rc genhtml_function_coverage=1 00:26:38.404 --rc genhtml_legend=1 00:26:38.404 --rc geninfo_all_blocks=1 00:26:38.404 --rc geninfo_unexecuted_blocks=1 00:26:38.404 00:26:38.404 ' 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
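The 'lt 1.15 2' detour that opens every host test is scripts/common.sh splitting both version strings on '.', '-' and ':' and comparing them field by field, exactly as the cmp_versions trace shows. A condensed, runnable sketch of that comparison (it omits the decimal() sanitizing the real script applies to each field):

    # Return 0 when $1 < $2, comparing dotted versions numerically
    # component by component; missing components count as 0.
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions: "less than" is false
    }

    lt 1.15 2 && echo "installed lcov is older than 2"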
00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:38.404 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:38.404 02:07:58 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:26:38.404 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:26:38.404 00:26:38.404 real 0m0.214s 00:26:38.404 user 0m0.105s 00:26:38.404 sys 0m0.123s 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:38.404 ************************************ 00:26:38.404 END TEST nvmf_multicontroller 00:26:38.404 ************************************ 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.404 ************************************ 00:26:38.404 START TEST nvmf_aer 00:26:38.404 ************************************ 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:26:38.404 * Looking for test storage... 
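nvmf_multicontroller passes in 0.2s because it short-circuits: on RDMA the host and target cannot share an IP, so the script prints the skip notice and exits 0, which the suite records as a pass rather than a failure. The guard amounts to the following (TEST_TRANSPORT is the variable name the harness conventionally passes via --transport; treat it as an assumption here):

    # Skip-and-pass guard: exit 0 so a known-unsupported transport does
    # not turn the whole run red.
    if [ "${TEST_TRANSPORT:-rdma}" == rdma ]; then
        echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
        exit 0
    fi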
00:26:38.404 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:26:38.404 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:38.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.663 --rc genhtml_branch_coverage=1 00:26:38.663 --rc genhtml_function_coverage=1 00:26:38.663 --rc genhtml_legend=1 00:26:38.663 --rc geninfo_all_blocks=1 00:26:38.663 --rc geninfo_unexecuted_blocks=1 00:26:38.663 00:26:38.663 ' 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:38.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.663 --rc genhtml_branch_coverage=1 00:26:38.663 --rc genhtml_function_coverage=1 00:26:38.663 --rc genhtml_legend=1 00:26:38.663 --rc geninfo_all_blocks=1 00:26:38.663 --rc geninfo_unexecuted_blocks=1 00:26:38.663 00:26:38.663 ' 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:38.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.663 --rc genhtml_branch_coverage=1 00:26:38.663 --rc genhtml_function_coverage=1 00:26:38.663 --rc genhtml_legend=1 00:26:38.663 --rc geninfo_all_blocks=1 00:26:38.663 --rc geninfo_unexecuted_blocks=1 00:26:38.663 00:26:38.663 ' 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:38.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.663 --rc genhtml_branch_coverage=1 00:26:38.663 --rc genhtml_function_coverage=1 00:26:38.663 --rc genhtml_legend=1 00:26:38.663 --rc geninfo_all_blocks=1 00:26:38.663 --rc geninfo_unexecuted_blocks=1 00:26:38.663 00:26:38.663 ' 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.663 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:38.664 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:26:38.664 02:07:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:45.259 02:08:04 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:26:45.259 Found 0000:18:00.0 (0x8086 - 0x159b) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:26:45.259 Found 0000:18:00.1 (0x8086 - 0x159b) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@403 -- # modinfo irdma 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:26:45.259 Found net 
devices under 0000:18:00.0: cvl_0_0 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:26:45.259 Found net devices under 0000:18:00.1: cvl_0_1 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # rdma_device_init 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@528 -- # allocate_nic_ips 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:26:45.259 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo cvl_0_0 00:26:45.260 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:26:45.260 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:45.260 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:45.260 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:26:45.260 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo cvl_0_1 00:26:45.260 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:26:45.260 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:45.260 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:26:45.260 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:26:45.260 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:26:45.260 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:45.260 02:08:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:26:45.260 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:26:45.260 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:26:45.260 altname enp24s0f0np0 00:26:45.260 altname ens785f0np0 00:26:45.260 inet 192.168.100.8/24 scope global cvl_0_0 00:26:45.260 valid_lft forever preferred_lft forever 00:26:45.260 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:26:45.260 valid_lft forever preferred_lft forever 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:26:45.260 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:26:45.260 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:26:45.260 altname enp24s0f1np1 00:26:45.260 altname ens785f1np1 00:26:45.260 inet 192.168.100.9/24 scope global cvl_0_1 
00:26:45.260 valid_lft forever preferred_lft forever 00:26:45.260 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:26:45.260 valid_lft forever preferred_lft forever 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo cvl_0_0 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo cvl_0_1 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:26:45.260 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:26:45.581 192.168.100.9' 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:26:45.581 192.168.100.9' 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # head -n 1 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:26:45.581 192.168.100.9' 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # tail -n +2 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # head -n 1 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=3334017 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 3334017 00:26:45.581 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:45.582 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 3334017 ']' 00:26:45.582 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.582 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:45.582 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.582 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:45.582 02:08:05 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:45.582 [2024-10-09 02:08:05.234013] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
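Note on the address discovery traced above: nvmf/common.sh walks the RDMA-capable net devices (cvl_0_0 and cvl_0_1 in this run) and pulls the first IPv4 address of each to build RDMA_IP_LIST. A minimal standalone sketch of that derivation, assuming the interface names seen in this log (the ip/awk/cut pipeline itself is exactly what the xtrace lines show):

  # First IPv4 address of each RDMA-capable interface, one per line.
  for ifc in cvl_0_0 cvl_0_1; do
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  done
  # Yields 192.168.100.8 and 192.168.100.9; 'head -n 1' of that list becomes
  # NVMF_FIRST_TARGET_IP and 'tail -n +2 | head -n 1' becomes NVMF_SECOND_TARGET_IP.
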
00:26:45.582 [2024-10-09 02:08:05.234137] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.582 [2024-10-09 02:08:05.365298] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:45.840 [2024-10-09 02:08:05.563484] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.840 [2024-10-09 02:08:05.563558] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.840 [2024-10-09 02:08:05.563572] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:45.840 [2024-10-09 02:08:05.563588] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:45.840 [2024-10-09 02:08:05.563598] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:45.840 [2024-10-09 02:08:05.566023] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.840 [2024-10-09 02:08:05.566044] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:45.840 [2024-10-09 02:08:05.566118] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.840 [2024-10-09 02:08:05.566122] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:46.407 [2024-10-09 02:08:06.107464] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x6120000292c0/0x617000007c40) succeed. 00:26:46.407 [2024-10-09 02:08:06.117254] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029440/0x617000007fc0) succeed. 00:26:46.407 [2024-10-09 02:08:06.117289] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:46.407 Malloc0 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.407 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:46.408 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.408 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:46.408 [2024-10-09 02:08:06.225423] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:46.666 [ 00:26:46.666 { 00:26:46.666 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:46.666 "subtype": "Discovery", 00:26:46.666 "listen_addresses": [], 00:26:46.666 "allow_any_host": true, 00:26:46.666 "hosts": [] 00:26:46.666 }, 00:26:46.666 { 00:26:46.666 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:46.666 "subtype": "NVMe", 00:26:46.666 "listen_addresses": [ 00:26:46.666 { 00:26:46.666 "trtype": "RDMA", 00:26:46.666 "adrfam": "IPv4", 00:26:46.666 "traddr": "192.168.100.8", 00:26:46.666 "trsvcid": "4420" 00:26:46.666 } 00:26:46.666 ], 00:26:46.666 "allow_any_host": true, 00:26:46.666 "hosts": [], 00:26:46.666 "serial_number": "SPDK00000000000001", 00:26:46.666 "model_number": "SPDK bdev Controller", 00:26:46.666 "max_namespaces": 2, 00:26:46.666 "min_cntlid": 1, 00:26:46.666 "max_cntlid": 65519, 00:26:46.666 "namespaces": [ 00:26:46.666 { 00:26:46.666 "nsid": 1, 00:26:46.666 "bdev_name": "Malloc0", 00:26:46.666 "name": "Malloc0", 00:26:46.666 "nguid": "8F02449782DA461DA82007DBF7D6C714", 00:26:46.666 "uuid": "8f024497-82da-461d-a820-07dbf7d6c714" 00:26:46.666 } 00:26:46.666 ] 00:26:46.666 } 00:26:46.666 ] 00:26:46.666 02:08:06 
nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3334140 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:26:46.666 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:26:46.924 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:46.925 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:46.925 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:26:46.925 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:46.925 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.925 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:46.925 Malloc1 00:26:46.925 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.925 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:46.925 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.925 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:46.925 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.925 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:46.925 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.925 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:46.925 [ 00:26:46.925 { 00:26:46.925 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:46.925 "subtype": "Discovery", 00:26:46.925 "listen_addresses": [], 00:26:46.925 "allow_any_host": true, 00:26:46.925 "hosts": [] 00:26:46.925 }, 00:26:46.925 { 00:26:46.925 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:46.925 "subtype": "NVMe", 00:26:46.925 "listen_addresses": [ 00:26:46.925 { 00:26:46.925 "trtype": "RDMA", 00:26:46.925 "adrfam": "IPv4", 00:26:46.925 "traddr": "192.168.100.8", 00:26:46.925 "trsvcid": "4420" 00:26:46.925 } 00:26:46.925 ], 00:26:46.925 "allow_any_host": true, 00:26:46.925 "hosts": [], 00:26:46.925 "serial_number": "SPDK00000000000001", 00:26:46.925 "model_number": "SPDK bdev Controller", 00:26:46.925 "max_namespaces": 2, 00:26:46.925 "min_cntlid": 1, 00:26:46.925 "max_cntlid": 65519, 00:26:46.925 "namespaces": [ 00:26:46.925 { 00:26:46.925 "nsid": 1, 00:26:46.925 "bdev_name": "Malloc0", 00:26:46.925 "name": "Malloc0", 00:26:46.925 "nguid": "8F02449782DA461DA82007DBF7D6C714", 00:26:46.925 "uuid": "8f024497-82da-461d-a820-07dbf7d6c714" 00:26:46.925 }, 00:26:46.925 { 00:26:46.925 "nsid": 2, 00:26:46.925 "bdev_name": "Malloc1", 00:26:46.925 "name": "Malloc1", 00:26:47.183 "nguid": "78A1345529924A7F802CF486F82021EF", 00:26:47.183 "uuid": "78a13455-2992-4a7f-802c-f486f82021ef" 00:26:47.183 } 00:26:47.183 ] 00:26:47.183 } 00:26:47.183 ] 00:26:47.183 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.183 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3334140 00:26:47.183 Asynchronous Event Request test 00:26:47.183 Attaching to 192.168.100.8 00:26:47.183 Attached to 192.168.100.8 00:26:47.183 Registering asynchronous event callbacks... 00:26:47.183 Starting namespace attribute notice tests for all controllers... 00:26:47.183 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:47.183 aer_cb - Changed Namespace 00:26:47.183 Cleaning up... 
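The nvmf_aer run that just finished is easiest to read as an RPC sequence: the aer tool is launched in the background against the first target IP, the script polls for /tmp/aer_touch_file (the waitforfile loop above) until the tool signals readiness, and adding a second namespace is what fires the namespace-attribute notice in the tool's output. A sketch reconstructed from the host/aer.sh trace, with rpc.py assumed as the manual equivalent of the suite's rpc_cmd wrapper:

  # Target setup (host/aer.sh@14-19 in the trace above).
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # With the aer tool attached, a second namespace triggers the AEN
  # (host/aer.sh@39-40).
  ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
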
00:26:47.183 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:47.183 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.183 02:08:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:47.441 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.441 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:47.441 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.441 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:47.441 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.441 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:47.441 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.441 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:47.441 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.441 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:47.441 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:47.441 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:47.441 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:26:47.441 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:47.441 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:47.441 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:26:47.441 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:47.441 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:47.441 rmmod nvme_rdma 00:26:47.699 rmmod nvme_fabrics 00:26:47.699 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:47.699 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:26:47.699 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:26:47.699 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 3334017 ']' 00:26:47.699 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 3334017 00:26:47.699 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 3334017 ']' 00:26:47.699 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 3334017 00:26:47.699 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:26:47.699 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:47.699 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3334017 00:26:47.699 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:47.699 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:47.699 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3334017' 00:26:47.699 killing process 
with pid 3334017 00:26:47.699 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 3334017 00:26:47.699 02:08:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 3334017 00:26:49.074 02:08:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:49.074 02:08:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:26:49.074 00:26:49.074 real 0m10.564s 00:26:49.074 user 0m13.075s 00:26:49.074 sys 0m6.001s 00:26:49.074 02:08:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:49.074 02:08:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:49.074 ************************************ 00:26:49.074 END TEST nvmf_aer 00:26:49.074 ************************************ 00:26:49.074 02:08:08 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:26:49.074 02:08:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:49.074 02:08:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:49.074 02:08:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.074 ************************************ 00:26:49.074 START TEST nvmf_async_init 00:26:49.074 ************************************ 00:26:49.074 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:26:49.074 * Looking for test storage... 00:26:49.074 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:26:49.074 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:49.074 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:26:49.074 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 
00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:49.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.334 --rc genhtml_branch_coverage=1 00:26:49.334 --rc genhtml_function_coverage=1 00:26:49.334 --rc genhtml_legend=1 00:26:49.334 --rc geninfo_all_blocks=1 00:26:49.334 --rc geninfo_unexecuted_blocks=1 00:26:49.334 00:26:49.334 ' 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:49.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.334 --rc genhtml_branch_coverage=1 00:26:49.334 --rc genhtml_function_coverage=1 00:26:49.334 --rc genhtml_legend=1 00:26:49.334 --rc geninfo_all_blocks=1 00:26:49.334 --rc geninfo_unexecuted_blocks=1 00:26:49.334 00:26:49.334 ' 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:49.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.334 --rc genhtml_branch_coverage=1 00:26:49.334 --rc genhtml_function_coverage=1 00:26:49.334 --rc genhtml_legend=1 00:26:49.334 --rc geninfo_all_blocks=1 00:26:49.334 --rc geninfo_unexecuted_blocks=1 00:26:49.334 00:26:49.334 ' 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:49.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.334 --rc genhtml_branch_coverage=1 00:26:49.334 --rc genhtml_function_coverage=1 00:26:49.334 --rc genhtml_legend=1 00:26:49.334 --rc geninfo_all_blocks=1 00:26:49.334 --rc geninfo_unexecuted_blocks=1 00:26:49.334 00:26:49.334 ' 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.334 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:49.335 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:49.335 02:08:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:49.335 02:08:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f8064bdf847a41b7b86f5ecb1fa31869 00:26:49.335 02:08:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:49.335 02:08:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:26:49.335 02:08:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.335 02:08:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:49.335 02:08:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:49.335 02:08:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:49.335 02:08:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.335 02:08:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.335 02:08:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.335 02:08:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:49.335 02:08:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:49.335 02:08:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:26:49.335 02:08:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:55.902 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:26:55.903 Found 0000:18:00.0 (0x8086 - 0x159b) 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:26:55.903 Found 0000:18:00.1 (0x8086 - 0x159b) 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@403 -- # modinfo irdma 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:26:55.903 Found net devices under 0000:18:00.0: cvl_0_0 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:26:55.903 Found net devices under 0000:18:00.1: cvl_0_1 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # rdma_device_init 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@528 -- # allocate_nic_ips 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo cvl_0_0 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo cvl_0_1 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:55.903 
02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:26:55.903 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:26:55.903 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:26:55.903 altname enp24s0f0np0 00:26:55.903 altname ens785f0np0 00:26:55.903 inet 192.168.100.8/24 scope global cvl_0_0 00:26:55.903 valid_lft forever preferred_lft forever 00:26:55.903 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:26:55.903 valid_lft forever preferred_lft forever 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:26:55.903 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:26:55.903 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:26:55.903 altname enp24s0f1np1 00:26:55.903 altname ens785f1np1 00:26:55.903 inet 192.168.100.9/24 scope global cvl_0_1 00:26:55.903 valid_lft forever preferred_lft forever 00:26:55.903 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:26:55.903 valid_lft forever preferred_lft forever 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:55.903 02:08:15 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:55.903 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo cvl_0_0 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo cvl_0_1 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:26:56.163 192.168.100.9' 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:26:56.163 192.168.100.9' 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # head -n 1 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # echo 
'192.168.100.8 00:26:56.163 192.168.100.9' 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # tail -n +2 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # head -n 1 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=3337410 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 3337410 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 3337410 ']' 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:56.163 02:08:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.163 [2024-10-09 02:08:15.911352] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:26:56.163 [2024-10-09 02:08:15.911460] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.422 [2024-10-09 02:08:16.039943] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.680 [2024-10-09 02:08:16.241376] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.680 [2024-10-09 02:08:16.241433] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:56.680 [2024-10-09 02:08:16.241449] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:56.680 [2024-10-09 02:08:16.241463] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:56.680 [2024-10-09 02:08:16.241474] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:56.680 [2024-10-09 02:08:16.242709] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.939 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:56.939 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:26:56.939 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:56.939 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:56.939 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.198 [2024-10-09 02:08:16.785175] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x6120000289c0/0x617000007fc0) succeed. 00:26:57.198 [2024-10-09 02:08:16.794519] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000028b40/0x617000008340) succeed. 00:26:57.198 [2024-10-09 02:08:16.794567] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.198 null0 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f8064bdf847a41b7b86f5ecb1fa31869 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.198 [2024-10-09 02:08:16.832439] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.198 nvme0n1 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 
00:26:57.198 [ 00:26:57.198 { 00:26:57.198 "name": "nvme0n1", 00:26:57.198 "aliases": [ 00:26:57.198 "f8064bdf-847a-41b7-b86f-5ecb1fa31869" 00:26:57.198 ], 00:26:57.198 "product_name": "NVMe disk", 00:26:57.198 "block_size": 512, 00:26:57.198 "num_blocks": 2097152, 00:26:57.198 "uuid": "f8064bdf-847a-41b7-b86f-5ecb1fa31869", 00:26:57.198 "numa_id": 0, 00:26:57.198 "assigned_rate_limits": { 00:26:57.198 "rw_ios_per_sec": 0, 00:26:57.198 "rw_mbytes_per_sec": 0, 00:26:57.198 "r_mbytes_per_sec": 0, 00:26:57.198 "w_mbytes_per_sec": 0 00:26:57.198 }, 00:26:57.198 "claimed": false, 00:26:57.198 "zoned": false, 00:26:57.198 "supported_io_types": { 00:26:57.198 "read": true, 00:26:57.198 "write": true, 00:26:57.198 "unmap": false, 00:26:57.198 "flush": true, 00:26:57.198 "reset": true, 00:26:57.198 "nvme_admin": true, 00:26:57.198 "nvme_io": true, 00:26:57.198 "nvme_io_md": false, 00:26:57.198 "write_zeroes": true, 00:26:57.198 "zcopy": false, 00:26:57.198 "get_zone_info": false, 00:26:57.198 "zone_management": false, 00:26:57.198 "zone_append": false, 00:26:57.198 "compare": true, 00:26:57.198 "compare_and_write": true, 00:26:57.198 "abort": true, 00:26:57.198 "seek_hole": false, 00:26:57.198 "seek_data": false, 00:26:57.198 "copy": true, 00:26:57.198 "nvme_iov_md": false 00:26:57.198 }, 00:26:57.198 "memory_domains": [ 00:26:57.198 { 00:26:57.198 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:26:57.198 "dma_device_type": 0 00:26:57.198 } 00:26:57.198 ], 00:26:57.198 "driver_specific": { 00:26:57.198 "nvme": [ 00:26:57.198 { 00:26:57.198 "trid": { 00:26:57.198 "trtype": "RDMA", 00:26:57.198 "adrfam": "IPv4", 00:26:57.198 "traddr": "192.168.100.8", 00:26:57.198 "trsvcid": "4420", 00:26:57.198 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:57.198 }, 00:26:57.198 "ctrlr_data": { 00:26:57.198 "cntlid": 1, 00:26:57.198 "vendor_id": "0x8086", 00:26:57.198 "model_number": "SPDK bdev Controller", 00:26:57.198 "serial_number": "00000000000000000000", 00:26:57.198 "firmware_revision": "25.01", 00:26:57.198 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:57.198 "oacs": { 00:26:57.198 "security": 0, 00:26:57.198 "format": 0, 00:26:57.198 "firmware": 0, 00:26:57.198 "ns_manage": 0 00:26:57.198 }, 00:26:57.198 "multi_ctrlr": true, 00:26:57.198 "ana_reporting": false 00:26:57.198 }, 00:26:57.198 "vs": { 00:26:57.198 "nvme_version": "1.3" 00:26:57.198 }, 00:26:57.198 "ns_data": { 00:26:57.198 "id": 1, 00:26:57.198 "can_share": true 00:26:57.198 } 00:26:57.198 } 00:26:57.198 ], 00:26:57.198 "mp_policy": "active_passive" 00:26:57.198 } 00:26:57.198 } 00:26:57.198 ] 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.198 02:08:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.198 [2024-10-09 02:08:16.942352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.198 [2024-10-09 02:08:16.976903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:57.198 [2024-10-09 02:08:17.001050] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:57.198 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.198 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:57.199 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.199 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.199 [ 00:26:57.199 { 00:26:57.199 "name": "nvme0n1", 00:26:57.199 "aliases": [ 00:26:57.199 "f8064bdf-847a-41b7-b86f-5ecb1fa31869" 00:26:57.199 ], 00:26:57.199 "product_name": "NVMe disk", 00:26:57.199 "block_size": 512, 00:26:57.199 "num_blocks": 2097152, 00:26:57.199 "uuid": "f8064bdf-847a-41b7-b86f-5ecb1fa31869", 00:26:57.199 "numa_id": 0, 00:26:57.199 "assigned_rate_limits": { 00:26:57.199 "rw_ios_per_sec": 0, 00:26:57.199 "rw_mbytes_per_sec": 0, 00:26:57.199 "r_mbytes_per_sec": 0, 00:26:57.199 "w_mbytes_per_sec": 0 00:26:57.199 }, 00:26:57.199 "claimed": false, 00:26:57.199 "zoned": false, 00:26:57.199 "supported_io_types": { 00:26:57.199 "read": true, 00:26:57.199 "write": true, 00:26:57.199 "unmap": false, 00:26:57.199 "flush": true, 00:26:57.199 "reset": true, 00:26:57.199 "nvme_admin": true, 00:26:57.199 "nvme_io": true, 00:26:57.199 "nvme_io_md": false, 00:26:57.199 "write_zeroes": true, 00:26:57.199 "zcopy": false, 00:26:57.199 "get_zone_info": false, 00:26:57.199 "zone_management": false, 00:26:57.199 "zone_append": false, 00:26:57.199 "compare": true, 00:26:57.199 "compare_and_write": true, 00:26:57.199 "abort": true, 00:26:57.199 "seek_hole": false, 00:26:57.199 "seek_data": false, 00:26:57.199 "copy": true, 00:26:57.199 "nvme_iov_md": false 00:26:57.199 }, 00:26:57.199 "memory_domains": [ 00:26:57.199 { 00:26:57.199 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:26:57.458 "dma_device_type": 0 00:26:57.458 } 00:26:57.458 ], 00:26:57.458 "driver_specific": { 00:26:57.458 "nvme": [ 00:26:57.458 { 00:26:57.458 "trid": { 00:26:57.458 "trtype": "RDMA", 00:26:57.458 "adrfam": "IPv4", 00:26:57.458 "traddr": "192.168.100.8", 00:26:57.458 "trsvcid": "4420", 00:26:57.458 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:57.458 }, 00:26:57.458 "ctrlr_data": { 00:26:57.458 "cntlid": 2, 00:26:57.458 "vendor_id": "0x8086", 00:26:57.458 "model_number": "SPDK bdev Controller", 00:26:57.458 "serial_number": "00000000000000000000", 00:26:57.458 "firmware_revision": "25.01", 00:26:57.458 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:57.458 "oacs": { 00:26:57.458 "security": 0, 00:26:57.458 "format": 0, 00:26:57.458 "firmware": 0, 00:26:57.458 "ns_manage": 0 00:26:57.458 }, 00:26:57.458 "multi_ctrlr": true, 00:26:57.458 "ana_reporting": false 00:26:57.458 }, 00:26:57.458 "vs": { 00:26:57.458 "nvme_version": "1.3" 00:26:57.458 }, 00:26:57.458 "ns_data": { 00:26:57.458 "id": 1, 00:26:57.458 "can_share": true 00:26:57.458 } 00:26:57.458 } 00:26:57.458 ], 00:26:57.458 "mp_policy": "active_passive" 00:26:57.458 } 00:26:57.458 } 00:26:57.458 ] 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.gMzhyKwofe 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.gMzhyKwofe 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.gMzhyKwofe 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.458 [2024-10-09 02:08:17.086144] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.458 [2024-10-09 02:08:17.102166] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:57.458 nvme0n1 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:57.458 02:08:17 
nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.458 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.458 [ 00:26:57.458 { 00:26:57.458 "name": "nvme0n1", 00:26:57.458 "aliases": [ 00:26:57.458 "f8064bdf-847a-41b7-b86f-5ecb1fa31869" 00:26:57.458 ], 00:26:57.458 "product_name": "NVMe disk", 00:26:57.458 "block_size": 512, 00:26:57.458 "num_blocks": 2097152, 00:26:57.458 "uuid": "f8064bdf-847a-41b7-b86f-5ecb1fa31869", 00:26:57.458 "numa_id": 0, 00:26:57.458 "assigned_rate_limits": { 00:26:57.458 "rw_ios_per_sec": 0, 00:26:57.458 "rw_mbytes_per_sec": 0, 00:26:57.458 "r_mbytes_per_sec": 0, 00:26:57.458 "w_mbytes_per_sec": 0 00:26:57.458 }, 00:26:57.458 "claimed": false, 00:26:57.458 "zoned": false, 00:26:57.458 "supported_io_types": { 00:26:57.458 "read": true, 00:26:57.458 "write": true, 00:26:57.458 "unmap": false, 00:26:57.458 "flush": true, 00:26:57.458 "reset": true, 00:26:57.458 "nvme_admin": true, 00:26:57.458 "nvme_io": true, 00:26:57.458 "nvme_io_md": false, 00:26:57.458 "write_zeroes": true, 00:26:57.458 "zcopy": false, 00:26:57.458 "get_zone_info": false, 00:26:57.458 "zone_management": false, 00:26:57.458 "zone_append": false, 00:26:57.458 "compare": true, 00:26:57.458 "compare_and_write": true, 00:26:57.458 "abort": true, 00:26:57.458 "seek_hole": false, 00:26:57.458 "seek_data": false, 00:26:57.458 "copy": true, 00:26:57.458 "nvme_iov_md": false 00:26:57.458 }, 00:26:57.458 "memory_domains": [ 00:26:57.458 { 00:26:57.458 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:26:57.458 "dma_device_type": 0 00:26:57.458 } 00:26:57.458 ], 00:26:57.458 "driver_specific": { 00:26:57.458 "nvme": [ 00:26:57.458 { 00:26:57.458 "trid": { 00:26:57.458 "trtype": "RDMA", 00:26:57.458 "adrfam": "IPv4", 00:26:57.458 "traddr": "192.168.100.8", 00:26:57.458 "trsvcid": "4421", 00:26:57.458 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:57.458 }, 00:26:57.458 "ctrlr_data": { 00:26:57.458 "cntlid": 3, 00:26:57.458 "vendor_id": "0x8086", 00:26:57.458 "model_number": "SPDK bdev Controller", 00:26:57.458 "serial_number": "00000000000000000000", 00:26:57.458 "firmware_revision": "25.01", 00:26:57.459 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:57.459 "oacs": { 00:26:57.459 "security": 0, 00:26:57.459 "format": 0, 00:26:57.459 "firmware": 0, 00:26:57.459 "ns_manage": 0 00:26:57.459 }, 00:26:57.459 "multi_ctrlr": true, 00:26:57.459 "ana_reporting": false 00:26:57.459 }, 00:26:57.459 "vs": { 00:26:57.459 "nvme_version": "1.3" 00:26:57.459 }, 00:26:57.459 "ns_data": { 00:26:57.459 "id": 1, 00:26:57.459 "can_share": true 00:26:57.459 } 00:26:57.459 } 00:26:57.459 ], 00:26:57.459 "mp_policy": "active_passive" 00:26:57.459 } 00:26:57.459 } 00:26:57.459 ] 00:26:57.459 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.459 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.459 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.459 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.459 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.459 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.gMzhyKwofe 00:26:57.459 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM 
EXIT 00:26:57.459 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:26:57.459 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:57.459 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:26:57.459 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:57.459 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:57.459 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:26:57.459 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:57.459 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:57.459 rmmod nvme_rdma 00:26:57.459 rmmod nvme_fabrics 00:26:57.717 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:57.717 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:26:57.717 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:26:57.717 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 3337410 ']' 00:26:57.717 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 3337410 00:26:57.717 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 3337410 ']' 00:26:57.717 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 3337410 00:26:57.717 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:26:57.717 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:57.717 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3337410 00:26:57.717 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:57.717 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:57.717 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3337410' 00:26:57.717 killing process with pid 3337410 00:26:57.717 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 3337410 00:26:57.717 02:08:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 3337410 00:26:58.652 02:08:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:58.652 02:08:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:26:58.652 00:26:58.652 real 0m9.616s 00:26:58.652 user 0m4.628s 00:26:58.652 sys 0m5.759s 00:26:58.653 02:08:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:58.653 02:08:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:58.653 ************************************ 00:26:58.653 END TEST nvmf_async_init 00:26:58.653 ************************************ 00:26:58.653 02:08:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:26:58.653 02:08:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:58.653 02:08:18 
nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:58.653 02:08:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.653 ************************************ 00:26:58.653 START TEST dma 00:26:58.653 ************************************ 00:26:58.653 02:08:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:26:58.912 * Looking for test storage... 00:26:58.912 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:58.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.912 --rc genhtml_branch_coverage=1 00:26:58.912 --rc genhtml_function_coverage=1 00:26:58.912 --rc genhtml_legend=1 00:26:58.912 --rc geninfo_all_blocks=1 00:26:58.912 --rc geninfo_unexecuted_blocks=1 00:26:58.912 00:26:58.912 ' 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:58.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.912 --rc genhtml_branch_coverage=1 00:26:58.912 --rc genhtml_function_coverage=1 00:26:58.912 --rc genhtml_legend=1 00:26:58.912 --rc geninfo_all_blocks=1 00:26:58.912 --rc geninfo_unexecuted_blocks=1 00:26:58.912 00:26:58.912 ' 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:58.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.912 --rc genhtml_branch_coverage=1 00:26:58.912 --rc genhtml_function_coverage=1 00:26:58.912 --rc genhtml_legend=1 00:26:58.912 --rc geninfo_all_blocks=1 00:26:58.912 --rc geninfo_unexecuted_blocks=1 00:26:58.912 00:26:58.912 ' 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:58.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.912 --rc genhtml_branch_coverage=1 00:26:58.912 --rc genhtml_function_coverage=1 00:26:58.912 --rc genhtml_legend=1 00:26:58.912 --rc geninfo_all_blocks=1 00:26:58.912 --rc geninfo_unexecuted_blocks=1 00:26:58.912 00:26:58.912 ' 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.912 02:08:18 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:58.913 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:26:58.913 02:08:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:27:05.479 Found 0000:18:00.0 (0x8086 - 0x159b) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:27:05.479 Found 0000:18:00.1 (0x8086 - 0x159b) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@403 -- # modinfo irdma 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:27:05.479 Found net devices under 0000:18:00.0: cvl_0_0 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:27:05.479 Found net devices under 0000:18:00.1: cvl_0_1 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # is_hw=yes 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@446 -- # rdma_device_init 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@528 -- # allocate_nic_ips 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:05.479 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.480 
02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo cvl_0_0 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo cvl_0_1 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:27:05.480 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:27:05.480 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:27:05.480 altname enp24s0f0np0 00:27:05.480 altname ens785f0np0 00:27:05.480 inet 192.168.100.8/24 scope global cvl_0_0 00:27:05.480 valid_lft forever preferred_lft forever 00:27:05.480 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:27:05.480 valid_lft forever preferred_lft forever 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:27:05.480 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:27:05.480 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:27:05.480 altname enp24s0f1np1 00:27:05.480 altname ens785f1np1 00:27:05.480 inet 192.168.100.9/24 scope global cvl_0_1 00:27:05.480 valid_lft forever preferred_lft forever 00:27:05.480 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:27:05.480 valid_lft forever preferred_lft forever 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # return 0 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:05.480 02:08:24 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo cvl_0_0 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo cvl_0_1 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:27:05.480 192.168.100.9' 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:27:05.480 192.168.100.9' 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@483 -- # head -n 1 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:27:05.480 192.168.100.9' 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # tail -n +2 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # head -n 1 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # nvmfpid=3340594 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # waitforlisten 3340594 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@831 -- # '[' -z 3340594 ']' 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:05.480 02:08:24 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:05.480 [2024-10-09 02:08:24.970394] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:27:05.480 [2024-10-09 02:08:24.970520] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.480 [2024-10-09 02:08:25.102976] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:05.480 [2024-10-09 02:08:25.293056] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.480 [2024-10-09 02:08:25.293111] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:05.480 [2024-10-09 02:08:25.293125] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.480 [2024-10-09 02:08:25.293139] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.480 [2024-10-09 02:08:25.293149] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:05.480 [2024-10-09 02:08:25.294769] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.480 [2024-10-09 02:08:25.294780] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.047 02:08:25 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:06.047 02:08:25 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # return 0 00:27:06.047 02:08:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:06.047 02:08:25 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:06.047 02:08:25 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:06.047 02:08:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.047 02:08:25 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:27:06.047 02:08:25 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.047 02:08:25 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:06.047 [2024-10-09 02:08:25.846621] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x612000028cc0/0x617000007c40) succeed. 00:27:06.047 [2024-10-09 02:08:25.856337] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000028e40/0x617000007fc0) succeed. 00:27:06.047 [2024-10-09 02:08:25.856374] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:27:06.047 02:08:25 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.047 02:08:25 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:27:06.047 02:08:25 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.047 02:08:25 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:06.305 Malloc0 00:27:06.305 02:08:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.305 02:08:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:27:06.305 02:08:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.305 02:08:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:06.305 02:08:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.305 02:08:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:27:06.305 02:08:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.305 02:08:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:06.305 02:08:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.305 02:08:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:27:06.305 02:08:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.305 02:08:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:06.305 [2024-10-09 02:08:26.120279] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:06.562 02:08:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.562 02:08:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:27:06.562 02:08:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:27:06.562 02:08:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@558 -- # config=() 00:27:06.562 02:08:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@558 -- # local subsystem config 00:27:06.562 02:08:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:06.562 02:08:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:06.562 { 00:27:06.562 "params": { 00:27:06.562 "name": "Nvme$subsystem", 00:27:06.562 "trtype": "$TEST_TRANSPORT", 00:27:06.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.562 "adrfam": "ipv4", 00:27:06.562 "trsvcid": "$NVMF_PORT", 00:27:06.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.562 "hdgst": ${hdgst:-false}, 00:27:06.562 "ddgst": ${ddgst:-false} 00:27:06.562 }, 00:27:06.562 "method": "bdev_nvme_attach_controller" 00:27:06.562 } 00:27:06.562 EOF 00:27:06.562 )") 00:27:06.562 02:08:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@580 -- # cat 00:27:06.562 02:08:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # jq . 
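host/dma.sh@96-100 above builds the target through rpc_cmd, autotest_common.sh's wrapper around SPDK's JSON-RPC client, and gen_nvmf_target_json emits the bdev config that test_dma reads from /dev/fd/62 (the resolved JSON is printed just below). Issued by hand with scripts/rpc.py, the same target setup would look roughly like:

  # Values copied from the rpc_cmd trace above; the socket defaults to /var/tmp/spdk.sock.
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  scripts/rpc.py bdev_malloc_create 256 512 -b Malloc0      # 256 MiB bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      -a -s SPDK00000000000001                              # -a: allow any host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t rdma -a 192.168.100.8 -s 4420
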
00:27:06.562 02:08:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@583 -- # IFS=, 00:27:06.562 02:08:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:27:06.562 "params": { 00:27:06.562 "name": "Nvme0", 00:27:06.562 "trtype": "rdma", 00:27:06.562 "traddr": "192.168.100.8", 00:27:06.562 "adrfam": "ipv4", 00:27:06.562 "trsvcid": "4420", 00:27:06.562 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:06.562 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:06.562 "hdgst": false, 00:27:06.562 "ddgst": false 00:27:06.562 }, 00:27:06.562 "method": "bdev_nvme_attach_controller" 00:27:06.562 }' 00:27:06.562 [2024-10-09 02:08:26.208788] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:27:06.562 [2024-10-09 02:08:26.208888] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3340788 ] 00:27:06.562 [2024-10-09 02:08:26.333647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:06.820 [2024-10-09 02:08:26.535044] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:06.820 [2024-10-09 02:08:26.535055] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:13.381 bdev Nvme0n1 reports 1 memory domains 00:27:13.381 bdev Nvme0n1 supports RDMA memory domain 00:27:13.381 Initialization complete, running randrw IO for 5 sec on 2 cores 00:27:13.381 ========================================================================== 00:27:13.381 Latency [us] 00:27:13.381 IOPS MiB/s Average min max 00:27:13.381 Core 2: 18975.42 74.12 842.52 316.65 15126.63 00:27:13.381 Core 3: 18740.24 73.20 853.06 316.03 15151.13 00:27:13.381 ========================================================================== 00:27:13.381 Total : 37715.66 147.33 847.75 316.03 15151.13 00:27:13.381 00:27:13.381 Total operations: 188598, translate 188598 pull_push 0 memzero 0 00:27:13.381 02:08:33 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:27:13.381 02:08:33 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:27:13.381 02:08:33 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:27:13.381 [2024-10-09 02:08:33.096258] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
00:27:13.381 [2024-10-09 02:08:33.096356] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3341667 ] 00:27:13.639 [2024-10-09 02:08:33.217958] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:13.639 [2024-10-09 02:08:33.413874] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:13.639 [2024-10-09 02:08:33.413883] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:21.749 bdev Malloc0 reports 2 memory domains 00:27:21.749 bdev Malloc0 doesn't support RDMA memory domain 00:27:21.749 Initialization complete, running randrw IO for 5 sec on 2 cores 00:27:21.749 ========================================================================== 00:27:21.749 Latency [us] 00:27:21.749 IOPS MiB/s Average min max 00:27:21.749 Core 2: 12100.84 47.27 1321.30 480.24 1799.30 00:27:21.749 Core 3: 12321.94 48.13 1297.58 482.56 1653.84 00:27:21.749 ========================================================================== 00:27:21.749 Total : 24422.79 95.40 1309.33 480.24 1799.30 00:27:21.749 00:27:21.749 Total operations: 122170, translate 0 pull_push 488680 memzero 0 00:27:21.749 02:08:40 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:27:21.749 02:08:40 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:27:21.749 02:08:40 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:27:21.749 02:08:40 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:27:21.749 Ignoring -M option 00:27:21.749 [2024-10-09 02:08:40.361157] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
00:27:21.749 [2024-10-09 02:08:40.361268] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3342545 ] 00:27:21.749 [2024-10-09 02:08:40.484963] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:21.749 [2024-10-09 02:08:40.683950] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:21.749 [2024-10-09 02:08:40.683957] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:28.307 bdev 73e4aeb4-df13-4945-8416-48c4f79e3fe4 reports 1 memory domains 00:27:28.307 bdev 73e4aeb4-df13-4945-8416-48c4f79e3fe4 supports RDMA memory domain 00:27:28.307 Initialization complete, running randread IO for 5 sec on 2 cores 00:27:28.307 ========================================================================== 00:27:28.307 Latency [us] 00:27:28.307 IOPS MiB/s Average min max 00:27:28.307 Core 2: 61816.27 241.47 257.88 96.51 4260.51 00:27:28.307 Core 3: 64020.20 250.08 249.01 77.83 3496.21 00:27:28.307 ========================================================================== 00:27:28.307 Total : 125836.47 491.55 253.37 77.83 4260.51 00:27:28.307 00:27:28.307 Total operations: 629261, translate 0 pull_push 0 memzero 629261 00:27:28.307 02:08:47 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:27:28.307 [2024-10-09 02:08:47.389502] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:30.209 Initializing NVMe Controllers 00:27:30.209 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:27:30.209 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:30.209 Initialization complete. Launching workers. 00:27:30.209 ======================================================== 00:27:30.209 Latency(us) 00:27:30.209 Device Information : IOPS MiB/s Average min max 00:27:30.209 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7987.67 4466.93 11973.41 00:27:30.209 ======================================================== 00:27:30.209 Total : 2016.00 7.88 7987.67 4466.93 11973.41 00:27:30.209 00:27:30.209 02:08:49 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:27:30.209 02:08:49 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:27:30.209 02:08:49 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:27:30.209 02:08:49 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:27:30.209 [2024-10-09 02:08:49.867139] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
00:27:30.209 [2024-10-09 02:08:49.867239] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3343766 ] 00:27:30.209 [2024-10-09 02:08:49.989196] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:30.467 [2024-10-09 02:08:50.191158] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:30.468 [2024-10-09 02:08:50.191168] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:37.074 bdev 6794d3d2-2db4-4215-ac41-5ad6dc1ed21b reports 1 memory domains 00:27:37.074 bdev 6794d3d2-2db4-4215-ac41-5ad6dc1ed21b supports RDMA memory domain 00:27:37.074 Initialization complete, running randrw IO for 5 sec on 2 cores 00:27:37.074 ========================================================================== 00:27:37.074 Latency [us] 00:27:37.074 IOPS MiB/s Average min max 00:27:37.074 Core 2: 17291.56 67.55 924.54 14.34 11899.49 00:27:37.074 Core 3: 17566.90 68.62 910.06 21.57 12122.00 00:27:37.074 ========================================================================== 00:27:37.074 Total : 34858.46 136.17 917.24 14.34 12122.00 00:27:37.074 00:27:37.075 Total operations: 174335, translate 174230 pull_push 0 memzero 105 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:27:37.075 rmmod nvme_rdma 00:27:37.075 rmmod nvme_fabrics 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@515 -- # '[' -n 3340594 ']' 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # killprocess 3340594 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@950 -- # '[' -z 3340594 ']' 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # kill -0 3340594 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # uname 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3340594 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3340594' 00:27:37.075 killing 
process with pid 3340594 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@969 -- # kill 3340594 00:27:37.075 02:08:56 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@974 -- # wait 3340594 00:27:39.613 02:08:58 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:39.613 02:08:58 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:27:39.613 00:27:39.613 real 0m40.383s 00:27:39.613 user 1m59.700s 00:27:39.613 sys 0m6.936s 00:27:39.613 02:08:58 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:39.613 02:08:58 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:39.613 ************************************ 00:27:39.613 END TEST dma 00:27:39.613 ************************************ 00:27:39.613 02:08:58 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:27:39.613 02:08:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:39.613 02:08:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:39.613 02:08:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.613 ************************************ 00:27:39.613 START TEST nvmf_identify 00:27:39.613 ************************************ 00:27:39.613 02:08:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:27:39.613 * Looking for test storage... 00:27:39.613 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:39.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.613 --rc genhtml_branch_coverage=1 00:27:39.613 --rc genhtml_function_coverage=1 00:27:39.613 --rc genhtml_legend=1 00:27:39.613 --rc geninfo_all_blocks=1 00:27:39.613 --rc geninfo_unexecuted_blocks=1 00:27:39.613 00:27:39.613 ' 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:39.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.613 --rc genhtml_branch_coverage=1 00:27:39.613 --rc genhtml_function_coverage=1 00:27:39.613 --rc genhtml_legend=1 00:27:39.613 --rc geninfo_all_blocks=1 00:27:39.613 --rc geninfo_unexecuted_blocks=1 00:27:39.613 00:27:39.613 ' 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:39.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.613 --rc genhtml_branch_coverage=1 00:27:39.613 --rc genhtml_function_coverage=1 00:27:39.613 --rc genhtml_legend=1 00:27:39.613 --rc geninfo_all_blocks=1 00:27:39.613 --rc geninfo_unexecuted_blocks=1 00:27:39.613 00:27:39.613 ' 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:39.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.613 --rc genhtml_branch_coverage=1 00:27:39.613 --rc genhtml_function_coverage=1 00:27:39.613 --rc genhtml_legend=1 00:27:39.613 --rc geninfo_all_blocks=1 00:27:39.613 --rc geninfo_unexecuted_blocks=1 00:27:39.613 00:27:39.613 ' 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 
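The scripts/common.sh trace above (lt 1.15 2 via cmp_versions) checks the installed lcov against 2.x to pick coverage options: version strings are split on ".", "-" and ":" and compared numerically field by field. A simplified sketch of that comparison, not the verbatim implementation:

  lt() { cmp_versions "$1" "<" "$2"; }
  cmp_versions() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local op=$2 v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          # Missing fields compare as 0; the first unequal field decides.
          if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == "<" ]]; return; fi
          if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == ">" ]]; return; fi
      done
      return 1   # equal is neither strictly "<" nor ">"
  }
  lt 1.15 2 && echo "lcov is pre-2.0"   # true in this run: 1 < 2 in the first field
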
00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.613 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:39.614 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:39.614 02:08:59 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:27:39.614 02:08:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:46.182 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.183 02:09:05 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:27:46.183 Found 0000:18:00.0 (0x8086 - 0x159b) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:27:46.183 Found 0000:18:00.1 (0x8086 - 0x159b) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
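The discovery loop above matches PCI functions against known NIC IDs (here 8086:159b, an Intel E810 bound to ice), loads irdma with RoCEv2 enabled, and then reads each function's netdevs out of sysfs. The essence of that flow, using the paths and IDs from this run:

  # E810 RoCE needs the irdma driver with RoCEv2 turned on (nvmf/common.sh@403):
  modprobe irdma roce_ena=1
  for pci in 0000:18:00.0 0000:18:00.1; do
      # One netdev per physical function, named by the kernel driver:
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")                   # keep basenames only
      echo "Found net devices under $pci: ${pci_net_devs[*]}"   # cvl_0_0 / cvl_0_1
  done
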
00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@403 -- # modinfo irdma 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:27:46.183 Found net devices under 0000:18:00.0: cvl_0_0 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:27:46.183 Found net devices under 0000:18:00.1: cvl_0_1 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # rdma_device_init 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # 
modprobe ib_umad 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@528 -- # allocate_nic_ips 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo cvl_0_0 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo cvl_0_1 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:46.183 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # 
ip addr show cvl_0_0 00:27:46.183 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:27:46.183 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:27:46.183 altname enp24s0f0np0 00:27:46.183 altname ens785f0np0 00:27:46.183 inet 192.168.100.8/24 scope global cvl_0_0 00:27:46.184 valid_lft forever preferred_lft forever 00:27:46.184 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:27:46.184 valid_lft forever preferred_lft forever 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:27:46.184 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:27:46.184 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:27:46.184 altname enp24s0f1np1 00:27:46.184 altname ens785f1np1 00:27:46.184 inet 192.168.100.9/24 scope global cvl_0_1 00:27:46.184 valid_lft forever preferred_lft forever 00:27:46.184 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:27:46.184 valid_lft forever preferred_lft forever 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:46.184 02:09:05 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo cvl_0_0 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo cvl_0_1 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:27:46.184 192.168.100.9' 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:27:46.184 192.168.100.9' 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # head -n 1 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:27:46.184 192.168.100.9' 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # tail -n +2 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # head -n 1 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe 
nvme-rdma 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3348237 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3348237 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 3348237 ']' 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:46.184 02:09:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:46.184 [2024-10-09 02:09:05.507227] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:27:46.184 [2024-10-09 02:09:05.507328] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.184 [2024-10-09 02:09:05.640549] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:46.184 [2024-10-09 02:09:05.844225] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:46.184 [2024-10-09 02:09:05.844276] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:46.184 [2024-10-09 02:09:05.844289] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:46.184 [2024-10-09 02:09:05.844302] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:46.184 [2024-10-09 02:09:05.844311] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
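host/identify.sh@18-23 above follows the standard autotest pattern: launch nvmf_tgt in the background, record its pid, and block in waitforlisten until the RPC socket answers. A sketch of that pattern, with the polling loop only approximating what autotest_common.sh's waitforlisten does:

  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # shm id 0, all tracepoint groups, cores 0-3
  nvmfpid=$!
  rpc_addr=/var/tmp/spdk.sock
  # Poll until the app serves JSON-RPC (sketch of waitforlisten):
  for (( i = 0; i < 100; i++ )); do
      scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && break
      sleep 0.1
  done
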
00:27:46.184 [2024-10-09 02:09:05.846626] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.184 [2024-10-09 02:09:05.846645] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:46.184 [2024-10-09 02:09:05.846715] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.184 [2024-10-09 02:09:05.846721] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:46.752 [2024-10-09 02:09:06.339240] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x6120000292c0/0x617000007c40) succeed. 00:27:46.752 [2024-10-09 02:09:06.348908] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029440/0x617000007fc0) succeed. 00:27:46.752 [2024-10-09 02:09:06.348943] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:46.752 Malloc0 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 
4420 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:46.752 [2024-10-09 02:09:06.496668] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:46.752 [ 00:27:46.752 { 00:27:46.752 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:46.752 "subtype": "Discovery", 00:27:46.752 "listen_addresses": [ 00:27:46.752 { 00:27:46.752 "trtype": "RDMA", 00:27:46.752 "adrfam": "IPv4", 00:27:46.752 "traddr": "192.168.100.8", 00:27:46.752 "trsvcid": "4420" 00:27:46.752 } 00:27:46.752 ], 00:27:46.752 "allow_any_host": true, 00:27:46.752 "hosts": [] 00:27:46.752 }, 00:27:46.752 { 00:27:46.752 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.752 "subtype": "NVMe", 00:27:46.752 "listen_addresses": [ 00:27:46.752 { 00:27:46.752 "trtype": "RDMA", 00:27:46.752 "adrfam": "IPv4", 00:27:46.752 "traddr": "192.168.100.8", 00:27:46.752 "trsvcid": "4420" 00:27:46.752 } 00:27:46.752 ], 00:27:46.752 "allow_any_host": true, 00:27:46.752 "hosts": [], 00:27:46.752 "serial_number": "SPDK00000000000001", 00:27:46.752 "model_number": "SPDK bdev Controller", 00:27:46.752 "max_namespaces": 32, 00:27:46.752 "min_cntlid": 1, 00:27:46.752 "max_cntlid": 65519, 00:27:46.752 "namespaces": [ 00:27:46.752 { 00:27:46.752 "nsid": 1, 00:27:46.752 "bdev_name": "Malloc0", 00:27:46.752 "name": "Malloc0", 00:27:46.752 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:46.752 "eui64": "ABCDEF0123456789", 00:27:46.752 "uuid": "b6052dc7-da5e-4c5f-bc6e-bebdfd593982" 00:27:46.752 } 00:27:46.752 ] 00:27:46.752 } 00:27:46.752 ] 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.752 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:47.014 [2024-10-09 02:09:06.580799] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
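Condensed, the rpc_cmd sequence traced above is what produced the nvmf_get_subsystems JSON just printed: create the RDMA transport, back a namespace with a 64 MiB / 512-byte-block malloc bdev, create the subsystem, attach the namespace with fixed NGUID/EUI-64 values, and add RDMA listeners for both the subsystem and discovery. Replayed directly through rpc.py (a stand-in for the harness's rpc_cmd wrapper), the same provisioning looks like:

# Target-side provisioning, commands copied from the trace above.
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420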
00:27:47.014 [2024-10-09 02:09:06.580879] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3348436 ] 00:27:47.014 [2024-10-09 02:09:06.640873] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:47.014 [2024-10-09 02:09:06.640981] nvme_rdma.c:2214:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:27:47.014 [2024-10-09 02:09:06.641008] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:27:47.014 [2024-10-09 02:09:06.641017] nvme_rdma.c:1219:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:27:47.014 [2024-10-09 02:09:06.641065] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:47.014 [2024-10-09 02:09:06.651916] nvme_rdma.c: 431:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:27:47.014 [2024-10-09 02:09:06.666597] nvme_rdma.c:1101:nvme_rdma_connect_established: *DEBUG*: rc =0 00:27:47.014 [2024-10-09 02:09:06.666619] nvme_rdma.c:1106:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:27:47.014 [2024-10-09 02:09:06.666636] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf240 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666647] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf268 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666660] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf290 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666670] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2b8 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666681] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2e0 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666689] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf308 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666699] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf330 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666708] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf358 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666718] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf380 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666727] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3a8 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666737] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3d0 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666745] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3f8 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666755] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf420 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666763] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf448 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666775] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: 
local addr 0x2000003cf470 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666784] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf498 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666794] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4c0 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666805] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4e8 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666817] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf510 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666829] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf538 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666840] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf560 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666849] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf588 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666860] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5b0 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666868] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5d8 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666886] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666894] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666904] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666912] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666922] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666931] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666940] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.666948] nvme_rdma.c:1120:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:27:47.014 [2024-10-09 02:09:06.666961] nvme_rdma.c:1123:nvme_rdma_connect_established: *DEBUG*: rc =0 00:27:47.014 [2024-10-09 02:09:06.666971] nvme_rdma.c:1128:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:27:47.014 [2024-10-09 02:09:06.667005] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.667026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003ced80 len:0x400 key:0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.671552] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.014 [2024-10-09 02:09:06.671576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:47.014 [2024-10-09 02:09:06.671592] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf240 length 0x10 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.671605] 
nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:47.014 [2024-10-09 02:09:06.671622] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:47.014 [2024-10-09 02:09:06.671634] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:47.014 [2024-10-09 02:09:06.671657] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0x9581b0de 00:27:47.014 [2024-10-09 02:09:06.671671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.014 [2024-10-09 02:09:06.671722] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.014 [2024-10-09 02:09:06.671732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:27:47.015 [2024-10-09 02:09:06.671746] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:47.015 [2024-10-09 02:09:06.671759] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf268 length 0x10 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.671774] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:47.015 [2024-10-09 02:09:06.671786] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.671805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.015 [2024-10-09 02:09:06.671822] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.015 [2024-10-09 02:09:06.671833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:27:47.015 [2024-10-09 02:09:06.671843] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:47.015 [2024-10-09 02:09:06.671854] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf290 length 0x10 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.671865] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:47.015 [2024-10-09 02:09:06.671879] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.671891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.015 [2024-10-09 02:09:06.671924] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.015 [2024-10-09 02:09:06.671933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:47.015 [2024-10-09 02:09:06.671945] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:47.015 [2024-10-09 02:09:06.671954] 
nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2b8 length 0x10 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.671971] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.671982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.015 [2024-10-09 02:09:06.672011] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.015 [2024-10-09 02:09:06.672019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:47.015 [2024-10-09 02:09:06.672033] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:47.015 [2024-10-09 02:09:06.672046] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:47.015 [2024-10-09 02:09:06.672057] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2e0 length 0x10 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.672067] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:47.015 [2024-10-09 02:09:06.672180] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:47.015 [2024-10-09 02:09:06.672189] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:47.015 [2024-10-09 02:09:06.672205] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.672217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.015 [2024-10-09 02:09:06.672248] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.015 [2024-10-09 02:09:06.672257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:47.015 [2024-10-09 02:09:06.672270] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:47.015 [2024-10-09 02:09:06.672280] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf308 length 0x10 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.672299] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.672314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.015 [2024-10-09 02:09:06.672344] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.015 [2024-10-09 02:09:06.672353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:47.015 [2024-10-09 02:09:06.672368] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - 
controller is ready 00:27:47.015 [2024-10-09 02:09:06.672377] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:47.015 [2024-10-09 02:09:06.672388] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf330 length 0x10 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.672399] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:47.015 [2024-10-09 02:09:06.672413] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:47.015 [2024-10-09 02:09:06.672435] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.672450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.672511] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.015 [2024-10-09 02:09:06.672522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:47.015 [2024-10-09 02:09:06.672544] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:47.015 [2024-10-09 02:09:06.672556] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:47.015 [2024-10-09 02:09:06.672565] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:47.015 [2024-10-09 02:09:06.672577] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 6 00:27:47.015 [2024-10-09 02:09:06.672588] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:47.015 [2024-10-09 02:09:06.672599] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:47.015 [2024-10-09 02:09:06.672608] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf358 length 0x10 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.672627] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:47.015 [2024-10-09 02:09:06.672639] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.672654] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.015 [2024-10-09 02:09:06.672693] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.015 [2024-10-09 02:09:06.672704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:47.015 [2024-10-09 02:09:06.672715] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d01c0 length 0x40 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.672734] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.015 [2024-10-09 02:09:06.672745] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0300 length 0x40 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.672757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.015 [2024-10-09 02:09:06.672767] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.672779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.015 [2024-10-09 02:09:06.672789] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0580 length 0x40 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.672800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.015 [2024-10-09 02:09:06.672809] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:47.015 [2024-10-09 02:09:06.672820] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf380 length 0x10 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.672839] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:47.015 [2024-10-09 02:09:06.672858] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.672870] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.015 [2024-10-09 02:09:06.672901] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.015 [2024-10-09 02:09:06.672910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:27:47.015 [2024-10-09 02:09:06.672923] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:47.015 [2024-10-09 02:09:06.672933] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:47.015 [2024-10-09 02:09:06.672944] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3a8 length 0x10 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.672960] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.672977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.673014] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.015 [2024-10-09 02:09:06.673026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:47.015 [2024-10-09 02:09:06.673044] nvme_rdma.c:2389:nvme_rdma_request_ready: 
*DEBUG*: local addr 0x2000003cf3d0 length 0x10 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.673064] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:47.015 [2024-10-09 02:09:06.673112] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.673129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x400 key:0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.673146] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d06c0 length 0x40 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.673161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.015 [2024-10-09 02:09:06.673213] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.015 [2024-10-09 02:09:06.673225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:47.015 [2024-10-09 02:09:06.673248] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0800 length 0x40 lkey 0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.673263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x9581b0de 00:27:47.015 [2024-10-09 02:09:06.673275] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3f8 length 0x10 lkey 0x9581b0de 00:27:47.016 [2024-10-09 02:09:06.673288] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.016 [2024-10-09 02:09:06.673297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:47.016 [2024-10-09 02:09:06.673310] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf420 length 0x10 lkey 0x9581b0de 00:27:47.016 [2024-10-09 02:09:06.673319] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.016 [2024-10-09 02:09:06.673329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:47.016 [2024-10-09 02:09:06.673345] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d06c0 length 0x40 lkey 0x9581b0de 00:27:47.016 [2024-10-09 02:09:06.673359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x9581b0de 00:27:47.016 [2024-10-09 02:09:06.673369] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf448 length 0x10 lkey 0x9581b0de 00:27:47.016 [2024-10-09 02:09:06.673422] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.016 [2024-10-09 02:09:06.673431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:47.016 [2024-10-09 02:09:06.673451] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf470 length 0x10 lkey 0x9581b0de 00:27:47.016 ===================================================== 00:27:47.016 NVMe over Fabrics controller at 
192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:47.016 ===================================================== 00:27:47.016 Controller Capabilities/Features 00:27:47.016 ================================ 00:27:47.016 Vendor ID: 0000 00:27:47.016 Subsystem Vendor ID: 0000 00:27:47.016 Serial Number: .................... 00:27:47.016 Model Number: ........................................ 00:27:47.016 Firmware Version: 25.01 00:27:47.016 Recommended Arb Burst: 0 00:27:47.016 IEEE OUI Identifier: 00 00 00 00:27:47.016 Multi-path I/O 00:27:47.016 May have multiple subsystem ports: No 00:27:47.016 May have multiple controllers: No 00:27:47.016 Associated with SR-IOV VF: No 00:27:47.016 Max Data Transfer Size: 131072 00:27:47.016 Max Number of Namespaces: 0 00:27:47.016 Max Number of I/O Queues: 1024 00:27:47.016 NVMe Specification Version (VS): 1.3 00:27:47.016 NVMe Specification Version (Identify): 1.3 00:27:47.016 Maximum Queue Entries: 128 00:27:47.016 Contiguous Queues Required: Yes 00:27:47.016 Arbitration Mechanisms Supported 00:27:47.016 Weighted Round Robin: Not Supported 00:27:47.016 Vendor Specific: Not Supported 00:27:47.016 Reset Timeout: 15000 ms 00:27:47.016 Doorbell Stride: 4 bytes 00:27:47.016 NVM Subsystem Reset: Not Supported 00:27:47.016 Command Sets Supported 00:27:47.016 NVM Command Set: Supported 00:27:47.016 Boot Partition: Not Supported 00:27:47.016 Memory Page Size Minimum: 4096 bytes 00:27:47.016 Memory Page Size Maximum: 4096 bytes 00:27:47.016 Persistent Memory Region: Not Supported 00:27:47.016 Optional Asynchronous Events Supported 00:27:47.016 Namespace Attribute Notices: Not Supported 00:27:47.016 Firmware Activation Notices: Not Supported 00:27:47.016 ANA Change Notices: Not Supported 00:27:47.016 PLE Aggregate Log Change Notices: Not Supported 00:27:47.016 LBA Status Info Alert Notices: Not Supported 00:27:47.016 EGE Aggregate Log Change Notices: Not Supported 00:27:47.016 Normal NVM Subsystem Shutdown event: Not Supported 00:27:47.016 Zone Descriptor Change Notices: Not Supported 00:27:47.016 Discovery Log Change Notices: Supported 00:27:47.016 Controller Attributes 00:27:47.016 128-bit Host Identifier: Not Supported 00:27:47.016 Non-Operational Permissive Mode: Not Supported 00:27:47.016 NVM Sets: Not Supported 00:27:47.016 Read Recovery Levels: Not Supported 00:27:47.016 Endurance Groups: Not Supported 00:27:47.016 Predictable Latency Mode: Not Supported 00:27:47.016 Traffic Based Keep ALive: Not Supported 00:27:47.016 Namespace Granularity: Not Supported 00:27:47.016 SQ Associations: Not Supported 00:27:47.016 UUID List: Not Supported 00:27:47.016 Multi-Domain Subsystem: Not Supported 00:27:47.016 Fixed Capacity Management: Not Supported 00:27:47.016 Variable Capacity Management: Not Supported 00:27:47.016 Delete Endurance Group: Not Supported 00:27:47.016 Delete NVM Set: Not Supported 00:27:47.016 Extended LBA Formats Supported: Not Supported 00:27:47.016 Flexible Data Placement Supported: Not Supported 00:27:47.016 00:27:47.016 Controller Memory Buffer Support 00:27:47.016 ================================ 00:27:47.016 Supported: No 00:27:47.016 00:27:47.016 Persistent Memory Region Support 00:27:47.016 ================================ 00:27:47.016 Supported: No 00:27:47.016 00:27:47.016 Admin Command Set Attributes 00:27:47.016 ============================ 00:27:47.016 Security Send/Receive: Not Supported 00:27:47.016 Format NVM: Not Supported 00:27:47.016 Firmware Activate/Download: Not Supported 00:27:47.016 Namespace Management: Not 
Supported 00:27:47.016 Device Self-Test: Not Supported 00:27:47.016 Directives: Not Supported 00:27:47.016 NVMe-MI: Not Supported 00:27:47.016 Virtualization Management: Not Supported 00:27:47.016 Doorbell Buffer Config: Not Supported 00:27:47.016 Get LBA Status Capability: Not Supported 00:27:47.016 Command & Feature Lockdown Capability: Not Supported 00:27:47.016 Abort Command Limit: 1 00:27:47.016 Async Event Request Limit: 4 00:27:47.016 Number of Firmware Slots: N/A 00:27:47.016 Firmware Slot 1 Read-Only: N/A 00:27:47.016 Firmware Activation Without Reset: N/A 00:27:47.016 Multiple Update Detection Support: N/A 00:27:47.016 Firmware Update Granularity: No Information Provided 00:27:47.016 Per-Namespace SMART Log: No 00:27:47.016 Asymmetric Namespace Access Log Page: Not Supported 00:27:47.016 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:47.016 Command Effects Log Page: Not Supported 00:27:47.016 Get Log Page Extended Data: Supported 00:27:47.016 Telemetry Log Pages: Not Supported 00:27:47.016 Persistent Event Log Pages: Not Supported 00:27:47.016 Supported Log Pages Log Page: May Support 00:27:47.016 Commands Supported & Effects Log Page: Not Supported 00:27:47.016 Feature Identifiers & Effects Log Page:May Support 00:27:47.016 NVMe-MI Commands & Effects Log Page: May Support 00:27:47.016 Data Area 4 for Telemetry Log: Not Supported 00:27:47.016 Error Log Page Entries Supported: 128 00:27:47.016 Keep Alive: Not Supported 00:27:47.016 00:27:47.016 NVM Command Set Attributes 00:27:47.016 ========================== 00:27:47.016 Submission Queue Entry Size 00:27:47.016 Max: 1 00:27:47.016 Min: 1 00:27:47.016 Completion Queue Entry Size 00:27:47.016 Max: 1 00:27:47.016 Min: 1 00:27:47.016 Number of Namespaces: 0 00:27:47.016 Compare Command: Not Supported 00:27:47.016 Write Uncorrectable Command: Not Supported 00:27:47.016 Dataset Management Command: Not Supported 00:27:47.016 Write Zeroes Command: Not Supported 00:27:47.016 Set Features Save Field: Not Supported 00:27:47.016 Reservations: Not Supported 00:27:47.016 Timestamp: Not Supported 00:27:47.016 Copy: Not Supported 00:27:47.016 Volatile Write Cache: Not Present 00:27:47.016 Atomic Write Unit (Normal): 1 00:27:47.016 Atomic Write Unit (PFail): 1 00:27:47.016 Atomic Compare & Write Unit: 1 00:27:47.016 Fused Compare & Write: Supported 00:27:47.016 Scatter-Gather List 00:27:47.016 SGL Command Set: Supported 00:27:47.016 SGL Keyed: Supported 00:27:47.016 SGL Bit Bucket Descriptor: Not Supported 00:27:47.016 SGL Metadata Pointer: Not Supported 00:27:47.016 Oversized SGL: Not Supported 00:27:47.016 SGL Metadata Address: Not Supported 00:27:47.017 SGL Offset: Supported 00:27:47.017 Transport SGL Data Block: Not Supported 00:27:47.017 Replay Protected Memory Block: Not Supported 00:27:47.017 00:27:47.017 Firmware Slot Information 00:27:47.017 ========================= 00:27:47.017 Active slot: 0 00:27:47.017 00:27:47.017 00:27:47.017 Error Log 00:27:47.017 ========= 00:27:47.017 00:27:47.017 Active Namespaces 00:27:47.017 ================= 00:27:47.017 Discovery Log Page 00:27:47.017 ================== 00:27:47.017 Generation Counter: 2 00:27:47.017 Number of Records: 2 00:27:47.017 Record Format: 0 00:27:47.017 00:27:47.017 Discovery Log Entry 0 00:27:47.017 ---------------------- 00:27:47.017 Transport Type: 1 (RDMA) 00:27:47.017 Address Family: 1 (IPv4) 00:27:47.017 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:47.017 Entry Flags: 00:27:47.017 Duplicate Returned Information: 1 00:27:47.017 Explicit Persistent 
Connection Support for Discovery: 1 00:27:47.017 Transport Requirements: 00:27:47.017 Secure Channel: Not Required 00:27:47.017 Port ID: 0 (0x0000) 00:27:47.017 Controller ID: 65535 (0xffff) 00:27:47.017 Admin Max SQ Size: 128 00:27:47.017 Transport Service Identifier: 4420 00:27:47.017 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:47.017 Transport Address: 192.168.100.8 00:27:47.017 Transport Specific Address Subtype - RDMA 00:27:47.017 RDMA QP Service Type: 1 (Reliable Connected) 00:27:47.017 RDMA Provider Type: 1 (No provider specified) 00:27:47.017 RDMA CM Service: 1 (RDMA_CM) 00:27:47.017 Discovery Log Entry 1 00:27:47.017 ---------------------- 00:27:47.017 Transport Type: 1 (RDMA) 00:27:47.017 Address Family: 1 (IPv4) 00:27:47.017 Subsystem Type: 2 (NVM Subsystem) 00:27:47.017 Entry Flags: 00:27:47.017 Duplicate Returned Information: 0 00:27:47.017 Explicit Persistent Connection Support for Discovery: 0 00:27:47.017 Transport Requirements: 00:27:47.017 Secure Channel: Not Required 00:27:47.017 Port ID: 0 (0x0000) 00:27:47.017 Controller ID: 65535 (0xffff) 00:27:47.017 Admin Max SQ Size: [2024-10-09 02:09:06.673583] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:47.017 [2024-10-09 02:09:06.673603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.017 [2024-10-09 02:09:06.673617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.017 [2024-10-09 02:09:06.673630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.017 [2024-10-09 02:09:06.673646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.017 [2024-10-09 02:09:06.673661] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0580 length 0x40 lkey 0x9581b0de 00:27:47.017 [2024-10-09 02:09:06.673674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.017 [2024-10-09 02:09:06.673701] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.017 [2024-10-09 02:09:06.673711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:27:47.017 [2024-10-09 02:09:06.673727] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.017 [2024-10-09 02:09:06.673739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.017 [2024-10-09 02:09:06.673752] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf498 length 0x10 lkey 0x9581b0de 00:27:47.017 [2024-10-09 02:09:06.673781] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.017 [2024-10-09 02:09:06.673792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:47.017 [2024-10-09 02:09:06.673804] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:47.017 [2024-10-09 02:09:06.673820] 
nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:47.017 [2024-10-09 02:09:06.673832] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4c0 length 0x10 lkey 0x9581b0de 00:27:47.017 [2024-10-09 02:09:06.673847] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.017 [2024-10-09 02:09:06.673861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.017 [2024-10-09 02:09:06.673889] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.017 [2024-10-09 02:09:06.673897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:27:47.017 [2024-10-09 02:09:06.673909] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4e8 length 0x10 lkey 0x9581b0de 00:27:47.017 [2024-10-09 02:09:06.673922] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.017 [2024-10-09 02:09:06.673937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.017 [2024-10-09 02:09:06.673962] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.017 [2024-10-09 02:09:06.673973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:27:47.017 [2024-10-09 02:09:06.673982] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf510 length 0x10 lkey 0x9581b0de 00:27:47.017 [2024-10-09 02:09:06.673996] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.017 [2024-10-09 02:09:06.674007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.017 [2024-10-09 02:09:06.674041] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.017 [2024-10-09 02:09:06.674049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:27:47.017 [2024-10-09 02:09:06.674063] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf538 length 0x10 lkey 0x9581b0de 00:27:47.017 [2024-10-09 02:09:06.674075] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.017 [2024-10-09 02:09:06.674088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.017 [2024-10-09 02:09:06.674117] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.017 [2024-10-09 02:09:06.674128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:27:47.017 [2024-10-09 02:09:06.674137] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf560 length 0x10 lkey 0x9581b0de 00:27:47.017 [2024-10-09 02:09:06.674154] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.017 [2024-10-09 02:09:06.674167] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.017 [2024-10-09 02:09:06.674199] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.017 [2024-10-09 02:09:06.674208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:27:47.017 [2024-10-09 02:09:06.674219] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf588 length 0x10 lkey 0x9581b0de 00:27:47.017 [2024-10-09 02:09:06.674237] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.017 [2024-10-09 02:09:06.674250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.017 [2024-10-09 02:09:06.674277] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.017 [2024-10-09 02:09:06.674288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:27:47.017 [2024-10-09 02:09:06.674297] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5b0 length 0x10 lkey 0x9581b0de 00:27:47.017 [2024-10-09 02:09:06.674313] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.017 [2024-10-09 02:09:06.674324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.017 [2024-10-09 02:09:06.674358] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.017 [2024-10-09 02:09:06.674367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:27:47.017 [2024-10-09 02:09:06.674378] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5d8 length 0x10 lkey 0x9581b0de 00:27:47.017 [2024-10-09 02:09:06.674390] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.017 [2024-10-09 02:09:06.674403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.017 [2024-10-09 02:09:06.674432] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.017 [2024-10-09 02:09:06.674443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:27:47.017 [2024-10-09 02:09:06.674454] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x9581b0de 00:27:47.017 [2024-10-09 02:09:06.674470] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.017 [2024-10-09 02:09:06.674481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.017 [2024-10-09 02:09:06.674512] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.018 [2024-10-09 02:09:06.674521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:27:47.018 [2024-10-09 02:09:06.674532] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: 
local addr 0x2000003cf628 length 0x10 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.674550] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.674568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.018 [2024-10-09 02:09:06.674588] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.018 [2024-10-09 02:09:06.674599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:27:47.018 [2024-10-09 02:09:06.674608] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.674622] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.674635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.018 [2024-10-09 02:09:06.674666] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.018 [2024-10-09 02:09:06.674675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:27:47.018 [2024-10-09 02:09:06.674686] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.674699] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.674713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.018 [2024-10-09 02:09:06.674739] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.018 [2024-10-09 02:09:06.674749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:27:47.018 [2024-10-09 02:09:06.674758] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.674777] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.674788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.018 [2024-10-09 02:09:06.674820] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.018 [2024-10-09 02:09:06.674828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:27:47.018 [2024-10-09 02:09:06.674839] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.674856] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.674869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.018 [2024-10-09 02:09:06.674899] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv 
completion 00:27:47.018 [2024-10-09 02:09:06.674911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:27:47.018 [2024-10-09 02:09:06.674919] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.674935] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.674946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.018 [2024-10-09 02:09:06.674972] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.018 [2024-10-09 02:09:06.674980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:27:47.018 [2024-10-09 02:09:06.674991] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf240 length 0x10 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.675003] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.675016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.018 [2024-10-09 02:09:06.675046] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.018 [2024-10-09 02:09:06.675059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:47.018 [2024-10-09 02:09:06.675070] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf268 length 0x10 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.675089] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.675100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.018 [2024-10-09 02:09:06.675127] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.018 [2024-10-09 02:09:06.675136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:27:47.018 [2024-10-09 02:09:06.675147] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf290 length 0x10 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.675159] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.675176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.018 [2024-10-09 02:09:06.675202] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.018 [2024-10-09 02:09:06.675213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:27:47.018 [2024-10-09 02:09:06.675222] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2b8 length 0x10 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.675236] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.018 [2024-10-09 
02:09:06.675247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.018 [2024-10-09 02:09:06.675278] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.018 [2024-10-09 02:09:06.675287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:27:47.018 [2024-10-09 02:09:06.675300] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2e0 length 0x10 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.675312] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.675325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.018 [2024-10-09 02:09:06.675353] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.018 [2024-10-09 02:09:06.675364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:27:47.018 [2024-10-09 02:09:06.675373] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf308 length 0x10 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.675387] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.675398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.018 [2024-10-09 02:09:06.675446] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.018 [2024-10-09 02:09:06.675455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:27:47.018 [2024-10-09 02:09:06.675466] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf330 length 0x10 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.675481] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.675494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.018 [2024-10-09 02:09:06.675521] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.018 [2024-10-09 02:09:06.675532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:47.018 [2024-10-09 02:09:06.679562] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf358 length 0x10 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.679590] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.679603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.018 [2024-10-09 02:09:06.679647] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.018 [2024-10-09 02:09:06.679656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0008 p:0 m:0 dnr:0 00:27:47.018 [2024-10-09 02:09:06.679667] 
nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf380 length 0x10 lkey 0x9581b0de 00:27:47.018 [2024-10-09 02:09:06.679683] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:27:47.018 128 00:27:47.018 Transport Service Identifier: 4420 00:27:47.018 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:47.018 Transport Address: 192.168.100.8 00:27:47.018 Transport Specific Address Subtype - RDMA 00:27:47.018 RDMA QP Service Type: 1 (Reliable Connected) 00:27:47.018 RDMA Provider Type: 1 (No provider specified) 00:27:47.018 RDMA CM Service: 1 (RDMA_CM) 00:27:47.018 02:09:06 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:47.280 [2024-10-09 02:09:06.836261] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:27:47.280 [2024-10-09 02:09:06.836342] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3348448 ] 00:27:47.280 [2024-10-09 02:09:06.901195] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:47.280 [2024-10-09 02:09:06.901307] nvme_rdma.c:2214:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:27:47.280 [2024-10-09 02:09:06.901338] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:27:47.280 [2024-10-09 02:09:06.901347] nvme_rdma.c:1219:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:27:47.280 [2024-10-09 02:09:06.901393] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:47.280 [2024-10-09 02:09:06.911842] nvme_rdma.c: 431:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
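The DEBUG trail above, and the one that follows for cnode1, walks the full NVMe-oF controller bring-up state machine: FABRIC CONNECT on the admin queue, property GETs of VS and CAP, CC.EN checked and cleared until CSTS.RDY reads 0, CC.EN set to 1 and polled until CSTS.RDY reads 1, IDENTIFY controller, AER and keep-alive configuration, then the log-page reads that yield the discovery entries printed above; teardown is the shutdown CC write polled to completion ("shutdown complete in 5 milliseconds"). The host-side step now being traced against the NVM subsystem is the same identify invocation as before, parameterized only by the subsystem NQN:

# Second identify pass from the log: same RDMA transport ID string, NVM subsystem
# NQN instead of the discovery NQN, all SPDK debug log flags enabled (-L all).
./build/bin/spdk_nvme_identify \
    -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -L all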
00:27:47.280 [2024-10-09 02:09:06.922535] nvme_rdma.c:1101:nvme_rdma_connect_established: *DEBUG*: rc =0 00:27:47.280 [2024-10-09 02:09:06.922561] nvme_rdma.c:1106:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:27:47.280 [2024-10-09 02:09:06.922578] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf240 length 0x10 lkey 0xd71d6a8 00:27:47.280 [2024-10-09 02:09:06.922590] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf268 length 0x10 lkey 0xd71d6a8 00:27:47.280 [2024-10-09 02:09:06.922603] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf290 length 0x10 lkey 0xd71d6a8 00:27:47.280 [2024-10-09 02:09:06.922614] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2b8 length 0x10 lkey 0xd71d6a8 00:27:47.280 [2024-10-09 02:09:06.922624] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2e0 length 0x10 lkey 0xd71d6a8 00:27:47.280 [2024-10-09 02:09:06.922633] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf308 length 0x10 lkey 0xd71d6a8 00:27:47.280 [2024-10-09 02:09:06.922646] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf330 length 0x10 lkey 0xd71d6a8 00:27:47.280 [2024-10-09 02:09:06.922655] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf358 length 0x10 lkey 0xd71d6a8 00:27:47.280 [2024-10-09 02:09:06.922665] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf380 length 0x10 lkey 0xd71d6a8 00:27:47.280 [2024-10-09 02:09:06.922674] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3a8 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922687] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3d0 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922695] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3f8 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922705] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf420 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922714] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf448 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922724] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf470 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922732] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf498 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922742] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4c0 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922751] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4e8 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922763] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf510 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922775] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf538 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922785] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf560 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922795] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf588 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922805] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5b0 length 0x10 lkey 0xd71d6a8 00:27:47.281 
[2024-10-09 02:09:06.922813] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5d8 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922831] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922839] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922849] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922857] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922867] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922875] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922886] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922894] nvme_rdma.c:1120:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:27:47.281 [2024-10-09 02:09:06.922905] nvme_rdma.c:1123:nvme_rdma_connect_established: *DEBUG*: rc =0 00:27:47.281 [2024-10-09 02:09:06.922915] nvme_rdma.c:1128:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:27:47.281 [2024-10-09 02:09:06.922948] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.922969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003ced80 len:0x400 key:0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.927554] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.281 [2024-10-09 02:09:06.927583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:47.281 [2024-10-09 02:09:06.927600] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf240 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.927613] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:47.281 [2024-10-09 02:09:06.927631] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:47.281 [2024-10-09 02:09:06.927642] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:47.281 [2024-10-09 02:09:06.927665] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.927680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.281 [2024-10-09 02:09:06.927716] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.281 [2024-10-09 02:09:06.927726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:27:47.281 [2024-10-09 02:09:06.927743] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:47.281 [2024-10-09 02:09:06.927753] 
nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf268 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.927768] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:47.281 [2024-10-09 02:09:06.927780] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.927798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.281 [2024-10-09 02:09:06.927816] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.281 [2024-10-09 02:09:06.927827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:27:47.281 [2024-10-09 02:09:06.927837] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:47.281 [2024-10-09 02:09:06.927848] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf290 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.927858] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:47.281 [2024-10-09 02:09:06.927872] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.927884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.281 [2024-10-09 02:09:06.927918] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.281 [2024-10-09 02:09:06.927928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:47.281 [2024-10-09 02:09:06.927941] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:47.281 [2024-10-09 02:09:06.927951] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2b8 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.927968] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.927980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.281 [2024-10-09 02:09:06.928010] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.281 [2024-10-09 02:09:06.928018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:47.281 [2024-10-09 02:09:06.928034] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:47.281 [2024-10-09 02:09:06.928043] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:47.281 [2024-10-09 02:09:06.928057] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2e0 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.928067] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:47.281 [2024-10-09 02:09:06.928179] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:47.281 [2024-10-09 02:09:06.928187] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:47.281 [2024-10-09 02:09:06.928202] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.928214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.281 [2024-10-09 02:09:06.928248] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.281 [2024-10-09 02:09:06.928257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:47.281 [2024-10-09 02:09:06.928270] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:47.281 [2024-10-09 02:09:06.928279] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf308 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.928294] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.928308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.281 [2024-10-09 02:09:06.928344] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.281 [2024-10-09 02:09:06.928352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:47.281 [2024-10-09 02:09:06.928370] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:47.281 [2024-10-09 02:09:06.928381] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:47.281 [2024-10-09 02:09:06.928393] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf330 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.928403] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:47.281 [2024-10-09 02:09:06.928421] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:47.281 [2024-10-09 02:09:06.928443] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.928462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.928534] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.281 [2024-10-09 02:09:06.928550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:47.281 
[2024-10-09 02:09:06.928566] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:47.281 [2024-10-09 02:09:06.928578] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:47.281 [2024-10-09 02:09:06.928589] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:47.281 [2024-10-09 02:09:06.928601] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 6 00:27:47.281 [2024-10-09 02:09:06.928610] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:47.281 [2024-10-09 02:09:06.928624] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:47.281 [2024-10-09 02:09:06.928634] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf358 length 0x10 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.928651] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:47.281 [2024-10-09 02:09:06.928663] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0xd71d6a8 00:27:47.281 [2024-10-09 02:09:06.928678] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.281 [2024-10-09 02:09:06.928714] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.281 [2024-10-09 02:09:06.928725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:47.282 [2024-10-09 02:09:06.928737] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d01c0 length 0x40 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.928754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.282 [2024-10-09 02:09:06.928765] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0300 length 0x40 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.928777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.282 [2024-10-09 02:09:06.928788] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.928800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.282 [2024-10-09 02:09:06.928810] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0580 length 0x40 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.928822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.282 [2024-10-09 02:09:06.928830] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:47.282 [2024-10-09 02:09:06.928841] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf380 length 0x10 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.928854] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:47.282 [2024-10-09 02:09:06.928870] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.928881] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.282 [2024-10-09 02:09:06.928912] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.282 [2024-10-09 02:09:06.928921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:27:47.282 [2024-10-09 02:09:06.928935] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:47.282 [2024-10-09 02:09:06.928945] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:47.282 [2024-10-09 02:09:06.928959] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3a8 length 0x10 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.928969] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:47.282 [2024-10-09 02:09:06.928982] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:47.282 [2024-10-09 02:09:06.928993] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.929009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.282 [2024-10-09 02:09:06.929038] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.282 [2024-10-09 02:09:06.929049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:27:47.282 [2024-10-09 02:09:06.929133] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:47.282 [2024-10-09 02:09:06.929144] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3d0 length 0x10 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.929161] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:47.282 [2024-10-09 02:09:06.929185] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.929200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.929244] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.282 [2024-10-09 02:09:06.929253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:47.282 [2024-10-09 02:09:06.929278] 
nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:47.282 [2024-10-09 02:09:06.929295] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:47.282 [2024-10-09 02:09:06.929307] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3f8 length 0x10 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.929319] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:47.282 [2024-10-09 02:09:06.929335] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.929347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.929423] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.282 [2024-10-09 02:09:06.929432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:47.282 [2024-10-09 02:09:06.929455] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:47.282 [2024-10-09 02:09:06.929465] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf420 length 0x10 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.929481] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:47.282 [2024-10-09 02:09:06.929499] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.929515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.929563] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.282 [2024-10-09 02:09:06.929574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:47.282 [2024-10-09 02:09:06.929590] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:47.282 [2024-10-09 02:09:06.929601] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf448 length 0x10 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.929611] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:47.282 [2024-10-09 02:09:06.929635] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:47.282 [2024-10-09 02:09:06.929646] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:47.282 [2024-10-09 02:09:06.929657] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:47.282 [2024-10-09 
02:09:06.929666] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:47.282 [2024-10-09 02:09:06.929678] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:47.282 [2024-10-09 02:09:06.929687] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:47.282 [2024-10-09 02:09:06.929700] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:47.282 [2024-10-09 02:09:06.929735] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.929750] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.282 [2024-10-09 02:09:06.929761] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d06c0 length 0x40 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.929779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.282 [2024-10-09 02:09:06.929794] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.282 [2024-10-09 02:09:06.929805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:47.282 [2024-10-09 02:09:06.929815] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf470 length 0x10 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.929826] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.282 [2024-10-09 02:09:06.929834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:47.282 [2024-10-09 02:09:06.929845] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf498 length 0x10 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.929858] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d06c0 length 0x40 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.929875] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.282 [2024-10-09 02:09:06.929898] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.282 [2024-10-09 02:09:06.929909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:47.282 [2024-10-09 02:09:06.929922] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4c0 length 0x10 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.929936] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d06c0 length 0x40 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.929949] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.282 [2024-10-09 02:09:06.929979] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.282 [2024-10-09 02:09:06.929988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:47.282 [2024-10-09 02:09:06.930001] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4e8 length 0x10 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.930013] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d06c0 length 0x40 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.930032] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.282 [2024-10-09 02:09:06.930051] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.282 [2024-10-09 02:09:06.930062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:27:47.282 [2024-10-09 02:09:06.930070] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf510 length 0x10 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.930095] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d06c0 length 0x40 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.930108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.930123] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0080 length 0x40 lkey 0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.930134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0xd71d6a8 00:27:47.282 [2024-10-09 02:09:06.930154] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0800 length 0x40 lkey 0xd71d6a8 00:27:47.283 [2024-10-09 02:09:06.930168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c8000 len:0x200 key:0xd71d6a8 00:27:47.283 [2024-10-09 02:09:06.930191] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0940 length 0x40 lkey 0xd71d6a8 00:27:47.283 [2024-10-09 02:09:06.930204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c6000 len:0x1000 key:0xd71d6a8 00:27:47.283 [2024-10-09 02:09:06.930220] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.283 [2024-10-09 02:09:06.930229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:47.283 [2024-10-09 02:09:06.930254] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf538 length 0x10 lkey 0xd71d6a8 00:27:47.283 [2024-10-09 02:09:06.930263] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.283 [2024-10-09 02:09:06.930274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:47.283 [2024-10-09 02:09:06.930288] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf560 length 0x10 lkey 0xd71d6a8 00:27:47.283 [2024-10-09 02:09:06.930298] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.283 [2024-10-09 
02:09:06.930306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:47.283 [2024-10-09 02:09:06.930319] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf588 length 0x10 lkey 0xd71d6a8 00:27:47.283 [2024-10-09 02:09:06.930328] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.283 [2024-10-09 02:09:06.930341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:47.283 [2024-10-09 02:09:06.930358] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5b0 length 0x10 lkey 0xd71d6a8 00:27:47.283 ===================================================== 00:27:47.283 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:47.283 ===================================================== 00:27:47.283 Controller Capabilities/Features 00:27:47.283 ================================ 00:27:47.283 Vendor ID: 8086 00:27:47.283 Subsystem Vendor ID: 8086 00:27:47.283 Serial Number: SPDK00000000000001 00:27:47.283 Model Number: SPDK bdev Controller 00:27:47.283 Firmware Version: 25.01 00:27:47.283 Recommended Arb Burst: 6 00:27:47.283 IEEE OUI Identifier: e4 d2 5c 00:27:47.283 Multi-path I/O 00:27:47.283 May have multiple subsystem ports: Yes 00:27:47.283 May have multiple controllers: Yes 00:27:47.283 Associated with SR-IOV VF: No 00:27:47.283 Max Data Transfer Size: 131072 00:27:47.283 Max Number of Namespaces: 32 00:27:47.283 Max Number of I/O Queues: 127 00:27:47.283 NVMe Specification Version (VS): 1.3 00:27:47.283 NVMe Specification Version (Identify): 1.3 00:27:47.283 Maximum Queue Entries: 128 00:27:47.283 Contiguous Queues Required: Yes 00:27:47.283 Arbitration Mechanisms Supported 00:27:47.283 Weighted Round Robin: Not Supported 00:27:47.283 Vendor Specific: Not Supported 00:27:47.283 Reset Timeout: 15000 ms 00:27:47.283 Doorbell Stride: 4 bytes 00:27:47.283 NVM Subsystem Reset: Not Supported 00:27:47.283 Command Sets Supported 00:27:47.283 NVM Command Set: Supported 00:27:47.283 Boot Partition: Not Supported 00:27:47.283 Memory Page Size Minimum: 4096 bytes 00:27:47.283 Memory Page Size Maximum: 4096 bytes 00:27:47.283 Persistent Memory Region: Not Supported 00:27:47.283 Optional Asynchronous Events Supported 00:27:47.283 Namespace Attribute Notices: Supported 00:27:47.283 Firmware Activation Notices: Not Supported 00:27:47.283 ANA Change Notices: Not Supported 00:27:47.283 PLE Aggregate Log Change Notices: Not Supported 00:27:47.283 LBA Status Info Alert Notices: Not Supported 00:27:47.283 EGE Aggregate Log Change Notices: Not Supported 00:27:47.283 Normal NVM Subsystem Shutdown event: Not Supported 00:27:47.283 Zone Descriptor Change Notices: Not Supported 00:27:47.283 Discovery Log Change Notices: Not Supported 00:27:47.283 Controller Attributes 00:27:47.283 128-bit Host Identifier: Supported 00:27:47.283 Non-Operational Permissive Mode: Not Supported 00:27:47.283 NVM Sets: Not Supported 00:27:47.283 Read Recovery Levels: Not Supported 00:27:47.283 Endurance Groups: Not Supported 00:27:47.283 Predictable Latency Mode: Not Supported 00:27:47.283 Traffic Based Keep ALive: Not Supported 00:27:47.283 Namespace Granularity: Not Supported 00:27:47.283 SQ Associations: Not Supported 00:27:47.283 UUID List: Not Supported 00:27:47.283 Multi-Domain Subsystem: Not Supported 00:27:47.283 Fixed Capacity Management: Not Supported 00:27:47.283 Variable Capacity Management: 
Not Supported 00:27:47.283 Delete Endurance Group: Not Supported 00:27:47.283 Delete NVM Set: Not Supported 00:27:47.283 Extended LBA Formats Supported: Not Supported 00:27:47.283 Flexible Data Placement Supported: Not Supported 00:27:47.283 00:27:47.283 Controller Memory Buffer Support 00:27:47.283 ================================ 00:27:47.283 Supported: No 00:27:47.283 00:27:47.283 Persistent Memory Region Support 00:27:47.283 ================================ 00:27:47.283 Supported: No 00:27:47.283 00:27:47.283 Admin Command Set Attributes 00:27:47.283 ============================ 00:27:47.283 Security Send/Receive: Not Supported 00:27:47.283 Format NVM: Not Supported 00:27:47.283 Firmware Activate/Download: Not Supported 00:27:47.283 Namespace Management: Not Supported 00:27:47.283 Device Self-Test: Not Supported 00:27:47.283 Directives: Not Supported 00:27:47.283 NVMe-MI: Not Supported 00:27:47.283 Virtualization Management: Not Supported 00:27:47.283 Doorbell Buffer Config: Not Supported 00:27:47.283 Get LBA Status Capability: Not Supported 00:27:47.283 Command & Feature Lockdown Capability: Not Supported 00:27:47.283 Abort Command Limit: 4 00:27:47.283 Async Event Request Limit: 4 00:27:47.283 Number of Firmware Slots: N/A 00:27:47.283 Firmware Slot 1 Read-Only: N/A 00:27:47.283 Firmware Activation Without Reset: N/A 00:27:47.283 Multiple Update Detection Support: N/A 00:27:47.283 Firmware Update Granularity: No Information Provided 00:27:47.283 Per-Namespace SMART Log: No 00:27:47.283 Asymmetric Namespace Access Log Page: Not Supported 00:27:47.283 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:47.283 Command Effects Log Page: Supported 00:27:47.283 Get Log Page Extended Data: Supported 00:27:47.283 Telemetry Log Pages: Not Supported 00:27:47.283 Persistent Event Log Pages: Not Supported 00:27:47.283 Supported Log Pages Log Page: May Support 00:27:47.283 Commands Supported & Effects Log Page: Not Supported 00:27:47.283 Feature Identifiers & Effects Log Page:May Support 00:27:47.283 NVMe-MI Commands & Effects Log Page: May Support 00:27:47.283 Data Area 4 for Telemetry Log: Not Supported 00:27:47.283 Error Log Page Entries Supported: 128 00:27:47.283 Keep Alive: Supported 00:27:47.283 Keep Alive Granularity: 10000 ms 00:27:47.283 00:27:47.283 NVM Command Set Attributes 00:27:47.283 ========================== 00:27:47.283 Submission Queue Entry Size 00:27:47.283 Max: 64 00:27:47.283 Min: 64 00:27:47.283 Completion Queue Entry Size 00:27:47.283 Max: 16 00:27:47.283 Min: 16 00:27:47.283 Number of Namespaces: 32 00:27:47.283 Compare Command: Supported 00:27:47.283 Write Uncorrectable Command: Not Supported 00:27:47.283 Dataset Management Command: Supported 00:27:47.283 Write Zeroes Command: Supported 00:27:47.283 Set Features Save Field: Not Supported 00:27:47.283 Reservations: Supported 00:27:47.283 Timestamp: Not Supported 00:27:47.283 Copy: Supported 00:27:47.283 Volatile Write Cache: Present 00:27:47.283 Atomic Write Unit (Normal): 1 00:27:47.283 Atomic Write Unit (PFail): 1 00:27:47.283 Atomic Compare & Write Unit: 1 00:27:47.283 Fused Compare & Write: Supported 00:27:47.283 Scatter-Gather List 00:27:47.283 SGL Command Set: Supported 00:27:47.283 SGL Keyed: Supported 00:27:47.283 SGL Bit Bucket Descriptor: Not Supported 00:27:47.283 SGL Metadata Pointer: Not Supported 00:27:47.283 Oversized SGL: Not Supported 00:27:47.283 SGL Metadata Address: Not Supported 00:27:47.283 SGL Offset: Supported 00:27:47.283 Transport SGL Data Block: Not Supported 00:27:47.283 Replay Protected Memory 
Block: Not Supported 00:27:47.283 00:27:47.283 Firmware Slot Information 00:27:47.283 ========================= 00:27:47.283 Active slot: 1 00:27:47.283 Slot 1 Firmware Revision: 25.01 00:27:47.283 00:27:47.283 00:27:47.283 Commands Supported and Effects 00:27:47.283 ============================== 00:27:47.283 Admin Commands 00:27:47.283 -------------- 00:27:47.283 Get Log Page (02h): Supported 00:27:47.283 Identify (06h): Supported 00:27:47.283 Abort (08h): Supported 00:27:47.283 Set Features (09h): Supported 00:27:47.283 Get Features (0Ah): Supported 00:27:47.283 Asynchronous Event Request (0Ch): Supported 00:27:47.283 Keep Alive (18h): Supported 00:27:47.283 I/O Commands 00:27:47.283 ------------ 00:27:47.283 Flush (00h): Supported LBA-Change 00:27:47.283 Write (01h): Supported LBA-Change 00:27:47.283 Read (02h): Supported 00:27:47.283 Compare (05h): Supported 00:27:47.283 Write Zeroes (08h): Supported LBA-Change 00:27:47.283 Dataset Management (09h): Supported LBA-Change 00:27:47.283 Copy (19h): Supported LBA-Change 00:27:47.283 00:27:47.284 Error Log 00:27:47.284 ========= 00:27:47.284 00:27:47.284 Arbitration 00:27:47.284 =========== 00:27:47.284 Arbitration Burst: 1 00:27:47.284 00:27:47.284 Power Management 00:27:47.284 ================ 00:27:47.284 Number of Power States: 1 00:27:47.284 Current Power State: Power State #0 00:27:47.284 Power State #0: 00:27:47.284 Max Power: 0.00 W 00:27:47.284 Non-Operational State: Operational 00:27:47.284 Entry Latency: Not Reported 00:27:47.284 Exit Latency: Not Reported 00:27:47.284 Relative Read Throughput: 0 00:27:47.284 Relative Read Latency: 0 00:27:47.284 Relative Write Throughput: 0 00:27:47.284 Relative Write Latency: 0 00:27:47.284 Idle Power: Not Reported 00:27:47.284 Active Power: Not Reported 00:27:47.284 Non-Operational Permissive Mode: Not Supported 00:27:47.284 00:27:47.284 Health Information 00:27:47.284 ================== 00:27:47.284 Critical Warnings: 00:27:47.284 Available Spare Space: OK 00:27:47.284 Temperature: OK 00:27:47.284 Device Reliability: OK 00:27:47.284 Read Only: No 00:27:47.284 Volatile Memory Backup: OK 00:27:47.284 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:47.284 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:47.284 Available Spare: 0% 00:27:47.284 Available Spare Threshold: 0% 00:27:47.284 Life Percentage [2024-10-09 02:09:06.930507] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0940 length 0x40 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.930521] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.284 [2024-10-09 02:09:06.930554] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.284 [2024-10-09 02:09:06.930564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:47.284 [2024-10-09 02:09:06.930575] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5d8 length 0x10 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.930626] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:47.284 [2024-10-09 02:09:06.930650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.284 [2024-10-09 02:09:06.930661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.284 [2024-10-09 02:09:06.930673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.284 [2024-10-09 02:09:06.930683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.284 [2024-10-09 02:09:06.930700] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0580 length 0x40 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.930712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.284 [2024-10-09 02:09:06.930743] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.284 [2024-10-09 02:09:06.930752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:27:47.284 [2024-10-09 02:09:06.930769] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.930781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.284 [2024-10-09 02:09:06.930796] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.930825] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.284 [2024-10-09 02:09:06.930836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:47.284 [2024-10-09 02:09:06.930845] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:47.284 [2024-10-09 02:09:06.930856] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:47.284 [2024-10-09 02:09:06.930865] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.930882] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.930896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.284 [2024-10-09 02:09:06.930930] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.284 [2024-10-09 02:09:06.930938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:27:47.284 [2024-10-09 02:09:06.930949] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.930966] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.930980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.284 [2024-10-09 02:09:06.931006] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.284 [2024-10-09 02:09:06.931017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:27:47.284 [2024-10-09 02:09:06.931025] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.931040] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.931051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.284 [2024-10-09 02:09:06.931077] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.284 [2024-10-09 02:09:06.931085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:27:47.284 [2024-10-09 02:09:06.931096] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.931110] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.931126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.284 [2024-10-09 02:09:06.931151] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.284 [2024-10-09 02:09:06.931162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:27:47.284 [2024-10-09 02:09:06.931171] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.931187] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.931198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.284 [2024-10-09 02:09:06.931230] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.284 [2024-10-09 02:09:06.931238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:27:47.284 [2024-10-09 02:09:06.931251] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.931263] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.931276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.284 [2024-10-09 02:09:06.931303] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.284 [2024-10-09 02:09:06.931314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:27:47.284 [2024-10-09 02:09:06.931325] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf240 length 0x10 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.931339] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.931350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:27:47.284 [2024-10-09 02:09:06.931387] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.284 [2024-10-09 02:09:06.931396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:47.284 [2024-10-09 02:09:06.931408] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf268 length 0x10 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.931420] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.931433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.284 [2024-10-09 02:09:06.931455] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.284 [2024-10-09 02:09:06.931466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:27:47.284 [2024-10-09 02:09:06.931475] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf290 length 0x10 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.931497] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.931511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.284 [2024-10-09 02:09:06.935550] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.284 [2024-10-09 02:09:06.935571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:27:47.284 [2024-10-09 02:09:06.935584] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2b8 length 0x10 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.935602] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0440 length 0x40 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.935617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:47.284 [2024-10-09 02:09:06.935651] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:47.284 [2024-10-09 02:09:06.935665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0004 p:0 m:0 dnr:0 00:27:47.284 [2024-10-09 02:09:06.935673] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2e0 length 0x10 lkey 0xd71d6a8 00:27:47.284 [2024-10-09 02:09:06.935686] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:27:47.284 Used: 0% 00:27:47.284 Data Units Read: 0 00:27:47.284 Data Units Written: 0 00:27:47.284 Host Read Commands: 0 00:27:47.284 Host Write Commands: 0 00:27:47.284 Controller Busy Time: 0 minutes 00:27:47.284 Power Cycles: 0 00:27:47.284 Power On Hours: 0 hours 00:27:47.284 Unsafe Shutdowns: 0 00:27:47.284 Unrecoverable Media Errors: 0 00:27:47.284 Lifetime Error Log Entries: 0 00:27:47.284 Warning Temperature Time: 0 minutes 00:27:47.285 Critical Temperature Time: 0 minutes 00:27:47.285 00:27:47.285 Number of Queues 00:27:47.285 ================ 00:27:47.285 Number of I/O Submission Queues: 127 00:27:47.285 Number of I/O Completion 
Queues: 127 00:27:47.285 00:27:47.285 Active Namespaces 00:27:47.285 ================= 00:27:47.285 Namespace ID:1 00:27:47.285 Error Recovery Timeout: Unlimited 00:27:47.285 Command Set Identifier: NVM (00h) 00:27:47.285 Deallocate: Supported 00:27:47.285 Deallocated/Unwritten Error: Not Supported 00:27:47.285 Deallocated Read Value: Unknown 00:27:47.285 Deallocate in Write Zeroes: Not Supported 00:27:47.285 Deallocated Guard Field: 0xFFFF 00:27:47.285 Flush: Supported 00:27:47.285 Reservation: Supported 00:27:47.285 Namespace Sharing Capabilities: Multiple Controllers 00:27:47.285 Size (in LBAs): 131072 (0GiB) 00:27:47.285 Capacity (in LBAs): 131072 (0GiB) 00:27:47.285 Utilization (in LBAs): 131072 (0GiB) 00:27:47.285 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:47.285 EUI64: ABCDEF0123456789 00:27:47.285 UUID: b6052dc7-da5e-4c5f-bc6e-bebdfd593982 00:27:47.285 Thin Provisioning: Not Supported 00:27:47.285 Per-NS Atomic Units: Yes 00:27:47.285 Atomic Boundary Size (Normal): 0 00:27:47.285 Atomic Boundary Size (PFail): 0 00:27:47.285 Atomic Boundary Offset: 0 00:27:47.285 Maximum Single Source Range Length: 65535 00:27:47.285 Maximum Copy Length: 65535 00:27:47.285 Maximum Source Range Count: 1 00:27:47.285 NGUID/EUI64 Never Reused: No 00:27:47.285 Namespace Write Protected: No 00:27:47.285 Number of LBA Formats: 1 00:27:47.285 Current LBA Format: LBA Format #00 00:27:47.285 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:47.285 00:27:47.285 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:47.285 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:47.285 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.285 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:47.285 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.285 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:47.285 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:47.285 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:47.285 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:27:47.285 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:27:47.285 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:27:47.285 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:27:47.285 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:47.285 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:27:47.285 rmmod nvme_rdma 00:27:47.285 rmmod nvme_fabrics 00:27:47.544 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:47.544 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:27:47.544 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:27:47.544 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 3348237 ']' 00:27:47.544 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 3348237 00:27:47.544 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # 
'[' -z 3348237 ']' 00:27:47.544 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 3348237 00:27:47.544 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:27:47.544 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:47.544 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3348237 00:27:47.544 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:47.544 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:47.544 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3348237' 00:27:47.544 killing process with pid 3348237 00:27:47.544 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 3348237 00:27:47.544 02:09:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 3348237 00:27:48.920 02:09:08 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:48.920 02:09:08 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:27:48.920 00:27:48.920 real 0m9.762s 00:27:48.921 user 0m11.770s 00:27:48.921 sys 0m5.607s 00:27:48.921 02:09:08 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:48.921 02:09:08 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:48.921 ************************************ 00:27:48.921 END TEST nvmf_identify 00:27:48.921 ************************************ 00:27:48.921 02:09:08 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:27:48.921 02:09:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:48.921 02:09:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:48.921 02:09:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.179 ************************************ 00:27:49.179 START TEST nvmf_perf 00:27:49.179 ************************************ 00:27:49.179 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:27:49.179 * Looking for test storage... 
00:27:49.179 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:27:49.179 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:49.179 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:27:49.179 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:49.179 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:49.179 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:49.179 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:49.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.180 --rc genhtml_branch_coverage=1 00:27:49.180 --rc genhtml_function_coverage=1 00:27:49.180 --rc genhtml_legend=1 00:27:49.180 --rc geninfo_all_blocks=1 00:27:49.180 --rc geninfo_unexecuted_blocks=1 00:27:49.180 00:27:49.180 ' 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:49.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.180 --rc genhtml_branch_coverage=1 00:27:49.180 --rc genhtml_function_coverage=1 00:27:49.180 --rc genhtml_legend=1 00:27:49.180 --rc geninfo_all_blocks=1 00:27:49.180 --rc geninfo_unexecuted_blocks=1 00:27:49.180 00:27:49.180 ' 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:49.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.180 --rc genhtml_branch_coverage=1 00:27:49.180 --rc genhtml_function_coverage=1 00:27:49.180 --rc genhtml_legend=1 00:27:49.180 --rc geninfo_all_blocks=1 00:27:49.180 --rc geninfo_unexecuted_blocks=1 00:27:49.180 00:27:49.180 ' 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:49.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.180 --rc genhtml_branch_coverage=1 00:27:49.180 --rc genhtml_function_coverage=1 00:27:49.180 --rc genhtml_legend=1 00:27:49.180 --rc geninfo_all_blocks=1 00:27:49.180 --rc geninfo_unexecuted_blocks=1 00:27:49.180 00:27:49.180 ' 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.180 02:09:08 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:49.180 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.180 
02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:49.180 02:09:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:55.744 02:09:15 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:27:55.744 Found 0000:18:00.0 (0x8086 - 0x159b) 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:27:55.744 Found 0000:18:00.1 (0x8086 - 0x159b) 00:27:55.744 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@403 -- # modinfo irdma 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.745 02:09:15 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:27:55.745 Found net devices under 0000:18:00.0: cvl_0_0 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:27:55.745 Found net devices under 0000:18:00.1: cvl_0_1 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # rdma_device_init 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@528 -- # allocate_nic_ips 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # 
rxe_cfg rxe-net 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo cvl_0_0 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo cvl_0_1 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:27:55.745 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:27:55.745 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:27:55.745 altname enp24s0f0np0 00:27:55.745 altname ens785f0np0 00:27:55.745 inet 192.168.100.8/24 scope global cvl_0_0 00:27:55.745 valid_lft forever preferred_lft forever 00:27:55.745 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:27:55.745 valid_lft forever preferred_lft forever 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- 
nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:27:55.745 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:27:55.745 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:27:55.745 altname enp24s0f1np1 00:27:55.745 altname ens785f1np1 00:27:55.745 inet 192.168.100.9/24 scope global cvl_0_1 00:27:55.745 valid_lft forever preferred_lft forever 00:27:55.745 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:27:55.745 valid_lft forever preferred_lft forever 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo cvl_0_0 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo cvl_0_1 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:55.745 02:09:15 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:55.745 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:27:55.745 192.168.100.9' 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # head -n 1 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:27:55.746 192.168.100.9' 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:27:55.746 192.168.100.9' 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # tail -n +2 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # head -n 1 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=3351660 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 3351660 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 3351660 ']' 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:55.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
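With both RDMA interfaces addressed (192.168.100.8 and 192.168.100.9) and nvme-rdma loaded, nvmfappstart launches the target application and blocks until its RPC socket answers. A rough equivalent of what the harness does here, reusing the shm id, tracepoint mask, and core mask from this run (the polling loop is a simplified stand-in for the waitforlisten helper):

    # shm id 0, all tracepoint groups, reactors on cores 0-3
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app is up and serving RPCs
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done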
00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:55.746 02:09:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:56.004 [2024-10-09 02:09:15.563448] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:27:56.004 [2024-10-09 02:09:15.563590] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.004 [2024-10-09 02:09:15.690823] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:56.267 [2024-10-09 02:09:15.893667] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.267 [2024-10-09 02:09:15.893723] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:56.267 [2024-10-09 02:09:15.893737] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:56.267 [2024-10-09 02:09:15.893758] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:56.267 [2024-10-09 02:09:15.893769] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:56.267 [2024-10-09 02:09:15.896044] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.267 [2024-10-09 02:09:15.896112] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:56.267 [2024-10-09 02:09:15.896172] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.267 [2024-10-09 02:09:15.896179] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:56.838 02:09:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:56.838 02:09:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:27:56.838 02:09:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:56.838 02:09:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:56.838 02:09:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:56.838 02:09:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:56.838 02:09:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:56.838 02:09:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:00.124 02:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:00.124 02:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:00.124 02:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:28:00.124 02:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:00.383 02:09:19 
nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:00.383 02:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:28:00.383 02:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:00.383 02:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:28:00.383 02:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:28:00.383 [2024-10-09 02:09:20.158994] rdma.c:2735:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:28:00.383 [2024-10-09 02:09:20.176566] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x61200002a4c0/0x617000007c40) succeed. 00:28:00.383 [2024-10-09 02:09:20.186586] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x61200002a640/0x617000007fc0) succeed. 00:28:00.383 [2024-10-09 02:09:20.186625] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:28:00.641 02:09:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:00.641 02:09:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:00.641 02:09:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:00.900 02:09:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:00.900 02:09:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:01.159 02:09:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:01.417 [2024-10-09 02:09:21.057206] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:01.417 02:09:21 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:28:01.676 02:09:21 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:28:01.676 02:09:21 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:28:01.676 02:09:21 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:01.676 02:09:21 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:28:03.051 Initializing NVMe Controllers 00:28:03.051 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:28:03.051 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:28:03.051 Initialization complete. Launching workers. 
00:28:03.051 ======================================================== 00:28:03.051 Latency(us) 00:28:03.051 Device Information : IOPS MiB/s Average min max 00:28:03.051 PCIE (0000:5e:00.0) NSID 1 from core 0: 89515.62 349.67 356.78 42.89 8249.23 00:28:03.051 ======================================================== 00:28:03.051 Total : 89515.62 349.67 356.78 42.89 8249.23 00:28:03.051 00:28:03.051 02:09:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:06.334 Initializing NVMe Controllers 00:28:06.334 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:06.334 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:06.334 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:06.334 Initialization complete. Launching workers. 00:28:06.334 ======================================================== 00:28:06.334 Latency(us) 00:28:06.334 Device Information : IOPS MiB/s Average min max 00:28:06.334 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5704.99 22.29 175.03 63.57 5026.66 00:28:06.334 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4529.00 17.69 220.56 86.51 5056.96 00:28:06.334 ======================================================== 00:28:06.334 Total : 10233.99 39.98 195.18 63.57 5056.96 00:28:06.334 00:28:06.593 02:09:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:10.777 Initializing NVMe Controllers 00:28:10.777 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:10.777 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:10.777 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:10.777 Initialization complete. Launching workers. 
00:28:10.777 ======================================================== 00:28:10.777 Latency(us) 00:28:10.777 Device Information : IOPS MiB/s Average min max 00:28:10.777 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15699.71 61.33 2037.59 569.38 9215.48 00:28:10.777 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4017.85 15.69 7962.68 4778.85 10048.53 00:28:10.777 ======================================================== 00:28:10.777 Total : 19717.55 77.02 3244.95 569.38 10048.53 00:28:10.777 00:28:10.777 02:09:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:10.777 02:09:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ rdma == \r\d\m\a ]] 00:28:10.777 02:09:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:28:10.777 No valid NVMe controllers or AIO or URING devices found 00:28:10.777 Initializing NVMe Controllers 00:28:10.777 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:10.777 Controller IO queue size 128, less than required. 00:28:10.777 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:10.777 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:10.777 Controller IO queue size 128, less than required. 00:28:10.777 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:10.777 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:10.777 WARNING: Some requested NVMe devices were skipped 00:28:10.777 02:09:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:28:16.185 Initializing NVMe Controllers 00:28:16.185 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:16.185 Controller IO queue size 128, less than required. 00:28:16.185 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:16.185 Controller IO queue size 128, less than required. 00:28:16.185 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:16.185 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:16.185 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:16.185 Initialization complete. Launching workers. 
00:28:16.185 00:28:16.185 ==================== 00:28:16.186 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:16.186 RDMA transport: 00:28:16.186 dev name: rocep24s0f0 00:28:16.186 polls: 253643 00:28:16.186 idle_polls: 248832 00:28:16.186 completions: 35734 00:28:16.186 queued_requests: 1 00:28:16.186 total_send_wrs: 17867 00:28:16.186 send_doorbell_updates: 4317 00:28:16.186 total_recv_wrs: 17994 00:28:16.186 recv_doorbell_updates: 4319 00:28:16.186 --------------------------------- 00:28:16.186 00:28:16.186 ==================== 00:28:16.186 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:16.186 RDMA transport: 00:28:16.186 dev name: rocep24s0f0 00:28:16.186 polls: 252473 00:28:16.186 idle_polls: 245690 00:28:16.186 completions: 42322 00:28:16.186 queued_requests: 1 00:28:16.186 total_send_wrs: 21161 00:28:16.186 send_doorbell_updates: 5905 00:28:16.186 total_recv_wrs: 21288 00:28:16.186 recv_doorbell_updates: 5906 00:28:16.186 --------------------------------- 00:28:16.186 ======================================================== 00:28:16.186 Latency(us) 00:28:16.186 Device Information : IOPS MiB/s Average min max 00:28:16.186 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4465.81 1116.45 29088.27 22809.61 322366.30 00:28:16.186 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5289.18 1322.30 25052.43 16636.66 387476.65 00:28:16.186 ======================================================== 00:28:16.186 Total : 9754.99 2438.75 26900.03 16636.66 387476.65 00:28:16.186 00:28:16.186 02:09:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:16.186 02:09:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:16.186 02:09:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:16.186 02:09:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:28:16.187 02:09:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:28.382 02:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=5216ac46-f006-47f2-940a-e27c7097d776 00:28:28.382 02:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 5216ac46-f006-47f2-940a-e27c7097d776 00:28:28.382 02:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=5216ac46-f006-47f2-940a-e27c7097d776 00:28:28.382 02:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:28.382 02:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:28.382 02:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:28.382 02:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:28.382 02:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:28.382 { 00:28:28.382 "uuid": "5216ac46-f006-47f2-940a-e27c7097d776", 00:28:28.382 "name": "lvs_0", 00:28:28.382 "base_bdev": "Nvme0n1", 00:28:28.382 "total_data_clusters": 952929, 00:28:28.382 "free_clusters": 952929, 00:28:28.382 "block_size": 512, 00:28:28.382 
"cluster_size": 4194304 00:28:28.382 } 00:28:28.382 ]' 00:28:28.382 02:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="5216ac46-f006-47f2-940a-e27c7097d776") .free_clusters' 00:28:28.382 02:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=952929 00:28:28.382 02:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="5216ac46-f006-47f2-940a-e27c7097d776") .cluster_size' 00:28:28.382 02:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:28.382 02:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=3811716 00:28:28.382 02:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 3811716 00:28:28.382 3811716 00:28:28.382 02:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 3811716 -gt 20480 ']' 00:28:28.382 02:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:28.382 02:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5216ac46-f006-47f2-940a-e27c7097d776 lbd_0 20480 00:28:29.756 02:09:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=237cd8fd-7b86-4d24-bc9e-df62adbeef9b 00:28:29.756 02:09:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 237cd8fd-7b86-4d24-bc9e-df62adbeef9b lvs_n_0 00:28:31.130 02:09:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=8100d5ab-b82a-4374-8f24-011d2b7e7eae 00:28:31.130 02:09:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 8100d5ab-b82a-4374-8f24-011d2b7e7eae 00:28:31.130 02:09:50 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=8100d5ab-b82a-4374-8f24-011d2b7e7eae 00:28:31.130 02:09:50 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:31.130 02:09:50 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:31.130 02:09:50 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:31.130 02:09:50 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:31.390 02:09:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:31.391 { 00:28:31.391 "uuid": "5216ac46-f006-47f2-940a-e27c7097d776", 00:28:31.391 "name": "lvs_0", 00:28:31.391 "base_bdev": "Nvme0n1", 00:28:31.391 "total_data_clusters": 952929, 00:28:31.391 "free_clusters": 947809, 00:28:31.391 "block_size": 512, 00:28:31.391 "cluster_size": 4194304 00:28:31.391 }, 00:28:31.391 { 00:28:31.391 "uuid": "8100d5ab-b82a-4374-8f24-011d2b7e7eae", 00:28:31.391 "name": "lvs_n_0", 00:28:31.391 "base_bdev": "237cd8fd-7b86-4d24-bc9e-df62adbeef9b", 00:28:31.391 "total_data_clusters": 5114, 00:28:31.391 "free_clusters": 5114, 00:28:31.391 "block_size": 512, 00:28:31.391 "cluster_size": 4194304 00:28:31.391 } 00:28:31.391 ]' 00:28:31.391 02:09:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="8100d5ab-b82a-4374-8f24-011d2b7e7eae") .free_clusters' 00:28:31.391 02:09:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:28:31.391 02:09:51 nvmf_rdma.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="8100d5ab-b82a-4374-8f24-011d2b7e7eae") .cluster_size' 00:28:31.391 02:09:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:31.391 02:09:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:28:31.391 02:09:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:28:31.391 20456 00:28:31.391 02:09:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:31.391 02:09:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8100d5ab-b82a-4374-8f24-011d2b7e7eae lbd_nest_0 20456 00:28:31.650 02:09:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=d408bdce-3bab-4515-bfc8-60296d4880ff 00:28:31.650 02:09:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:31.908 02:09:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:31.908 02:09:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 d408bdce-3bab-4515-bfc8-60296d4880ff 00:28:32.166 02:09:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:32.424 02:09:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:32.424 02:09:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:32.424 02:09:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:32.424 02:09:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:32.424 02:09:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:44.623 Initializing NVMe Controllers 00:28:44.623 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:44.623 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:44.623 Initialization complete. Launching workers. 
00:28:44.623 ======================================================== 00:28:44.623 Latency(us) 00:28:44.624 Device Information : IOPS MiB/s Average min max 00:28:44.624 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4673.90 2.28 213.53 86.13 7108.92 00:28:44.624 ======================================================== 00:28:44.624 Total : 4673.90 2.28 213.53 86.13 7108.92 00:28:44.624 00:28:44.624 02:10:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:44.624 02:10:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:56.822 Initializing NVMe Controllers 00:28:56.822 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:56.822 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:56.822 Initialization complete. Launching workers. 00:28:56.822 ======================================================== 00:28:56.822 Latency(us) 00:28:56.822 Device Information : IOPS MiB/s Average min max 00:28:56.822 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 125.40 15.68 7978.28 3990.20 11970.67 00:28:56.822 ======================================================== 00:28:56.822 Total : 125.40 15.68 7978.28 3990.20 11970.67 00:28:56.822 00:28:56.822 02:10:14 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:56.822 02:10:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:56.822 02:10:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:06.790 Initializing NVMe Controllers 00:29:06.791 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:06.791 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:06.791 Initialization complete. Launching workers. 00:29:06.791 ======================================================== 00:29:06.791 Latency(us) 00:29:06.791 Device Information : IOPS MiB/s Average min max 00:29:06.791 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9777.47 4.77 3272.52 975.93 10428.27 00:29:06.791 ======================================================== 00:29:06.791 Total : 9777.47 4.77 3272.52 975.93 10428.27 00:29:06.791 00:29:06.791 02:10:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:06.791 02:10:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:18.995 Initializing NVMe Controllers 00:29:18.995 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:18.995 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:18.995 Initialization complete. Launching workers. 
00:29:18.995 ========================================================
00:29:18.995 Latency(us)
00:29:18.995 Device Information : IOPS MiB/s Average min max
00:29:18.995 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8316.06 1039.51 3849.09 680.73 19865.81
00:29:18.995 ========================================================
00:29:18.995 Total : 8316.06 1039.51 3849.09 680.73 19865.81
00:29:18.995
00:29:18.995 02:10:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:29:18.995 02:10:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:29:18.995 02:10:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:29:31.198 Initializing NVMe Controllers
00:29:31.198 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:29:31.198 Controller IO queue size 128, less than required.
00:29:31.198 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:31.198 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:31.198 Initialization complete. Launching workers.
00:29:31.198 ========================================================
00:29:31.198 Latency(us)
00:29:31.198 Device Information : IOPS MiB/s Average min max
00:29:31.198 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16176.80 7.90 7915.72 2430.07 16934.52
00:29:31.198 ========================================================
00:29:31.198 Total : 16176.80 7.90 7915.72 2430.07 16934.52
00:29:31.198
00:29:31.198 02:10:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:29:31.198 02:10:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:29:41.170 Initializing NVMe Controllers
00:29:41.170 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:29:41.170 Controller IO queue size 128, less than required.
00:29:41.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:41.170 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:41.170 Initialization complete. Launching workers.
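Both q=128 runs print "Controller IO queue size 128, less than required". An NVMe queue of depth N can hold at most N-1 outstanding commands (a completely full ring is indistinguishable from an empty one), so a 128-entry IO queue cannot carry 128 in-flight requests and the overflow waits inside the host driver, inflating the measured latency. The target's rdma transport was created with defaults in this run; if a deeper queue were wanted it would be requested at transport creation, along the lines of the sketch below (the -q/max_queue_depth spelling is an assumption here, confirm with rpc.py nvmf_create_transport -h in this tree):

# hypothetical: the transport options seen later in this log, plus a 256-deep queue
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -q 256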
00:29:41.170 ========================================================
00:29:41.170 Latency(us)
00:29:41.170 Device Information : IOPS MiB/s Average min max
00:29:41.170 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5169.74 646.22 24759.69 7977.37 93630.94
00:29:41.170 ========================================================
00:29:41.170 Total : 5169.74 646.22 24759.69 7977.37 93630.94
00:29:41.170
00:29:41.428 02:11:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:41.687 02:11:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d408bdce-3bab-4515-bfc8-60296d4880ff
00:29:42.622 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:29:42.622 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 237cd8fd-7b86-4d24-bc9e-df62adbeef9b
00:29:42.880 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup
00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:29:43.141 rmmod nvme_rdma
00:29:43.141 rmmod nvme_fabrics
00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 3351660 ']'
00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 3351660
00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 3351660 ']'
00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 3351660
00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname
00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3351660
00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:29:43.141
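Teardown above runs strictly in reverse order of construction: the subsystem is deleted first so no namespace still references the lvols, then the nested lvol and its lvstore, then the base lvol and lvstore, and finally nvmftestfini unloads nvme-rdma/nvme-fabrics and kills the target. The five RPCs in isolation, values from this run:

rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
$rpc bdev_lvol_delete d408bdce-3bab-4515-bfc8-60296d4880ff    # nested lvol lbd_nest_0
$rpc bdev_lvol_delete_lvstore -l lvs_n_0                      # nested lvstore
$rpc bdev_lvol_delete 237cd8fd-7b86-4d24-bc9e-df62adbeef9b    # base lvol backing lvs_n_0
$rpc bdev_lvol_delete_lvstore -l lvs_0                        # base lvstore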
02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3351660' 00:29:43.141 killing process with pid 3351660 00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 3351660 00:29:43.141 02:11:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 3351660 00:29:48.417 02:11:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:48.417 02:11:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:29:48.417 00:29:48.417 real 1m59.089s 00:29:48.417 user 7m29.546s 00:29:48.417 sys 0m8.278s 00:29:48.417 02:11:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:48.417 02:11:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:48.417 ************************************ 00:29:48.417 END TEST nvmf_perf 00:29:48.417 ************************************ 00:29:48.417 02:11:07 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:29:48.417 02:11:07 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:48.417 02:11:07 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:48.417 02:11:07 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.417 ************************************ 00:29:48.417 START TEST nvmf_fio_host 00:29:48.417 ************************************ 00:29:48.417 02:11:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:29:48.417 * Looking for test storage... 
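run_test brackets each suite with the timing totals and the START/END banners seen here. The fio host suite that begins below can be launched on its own, outside Jenkins, with the same arguments (assuming a built SPDK tree and RDMA-capable NICs like this rig's e810 pair):

cd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
sudo ./test/nvmf/host/fio.sh --transport=rdma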
00:29:48.417 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:48.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.417 --rc genhtml_branch_coverage=1 00:29:48.417 --rc genhtml_function_coverage=1 00:29:48.417 --rc genhtml_legend=1 00:29:48.417 --rc geninfo_all_blocks=1 00:29:48.417 --rc geninfo_unexecuted_blocks=1 00:29:48.417 00:29:48.417 ' 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:48.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.417 --rc genhtml_branch_coverage=1 00:29:48.417 --rc genhtml_function_coverage=1 00:29:48.417 --rc genhtml_legend=1 00:29:48.417 --rc geninfo_all_blocks=1 00:29:48.417 --rc geninfo_unexecuted_blocks=1 00:29:48.417 00:29:48.417 ' 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:48.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.417 --rc genhtml_branch_coverage=1 00:29:48.417 --rc genhtml_function_coverage=1 00:29:48.417 --rc genhtml_legend=1 00:29:48.417 --rc geninfo_all_blocks=1 00:29:48.417 --rc geninfo_unexecuted_blocks=1 00:29:48.417 00:29:48.417 ' 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:48.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.417 --rc genhtml_branch_coverage=1 00:29:48.417 --rc genhtml_function_coverage=1 00:29:48.417 --rc genhtml_legend=1 00:29:48.417 --rc geninfo_all_blocks=1 00:29:48.417 --rc geninfo_unexecuted_blocks=1 00:29:48.417 00:29:48.417 ' 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.417 02:11:08 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.417 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- 
# NVMF_SECOND_PORT=4421 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:48.418 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
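The "[: : integer expression expected" complaint above is cosmetic: build_nvmf_app_args reaches nvmf/common.sh line 33 with an unset variable, so test(1) compares an empty string numerically, prints the error, and the script simply takes the false branch. What effectively executed, and a defaulted guard that would keep the same behavior without the noise (the variable name is an assumption; xtrace only shows its empty expansion):

'[' '' -eq 1 ']'                       # what line 33 effectively ran: exit status 2 plus the message
[ "${SPDK_RUN_NON_ROOT:-0}" -eq 1 ]    # defaulted form (name assumed): same branch taken, no error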
00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:29:48.418 02:11:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:54.990 02:11:14 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:29:54.990 Found 0000:18:00.0 (0x8086 - 0x159b) 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.990 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:29:54.991 Found 0000:18:00.1 (0x8086 - 0x159b) 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@403 -- # modinfo irdma 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:29:54.991 Found net devices under 0000:18:00.0: cvl_0_0 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:29:54.991 Found net devices under 0000:18:00.1: cvl_0_1 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # rdma_device_init 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:54.991 02:11:14 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@528 -- # allocate_nic_ips 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo cvl_0_0 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo cvl_0_1 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:54.991 02:11:14 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:29:54.991 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:29:54.991 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:29:54.991 altname enp24s0f0np0 00:29:54.991 altname ens785f0np0 00:29:54.991 inet 192.168.100.8/24 scope global cvl_0_0 00:29:54.991 valid_lft forever preferred_lft forever 00:29:54.991 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:29:54.991 valid_lft forever preferred_lft forever 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:29:54.991 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:29:54.991 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:29:54.991 altname enp24s0f1np1 00:29:54.991 altname ens785f1np1 00:29:54.991 inet 192.168.100.9/24 scope global cvl_0_1 00:29:54.991 valid_lft forever preferred_lft forever 00:29:54.991 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:29:54.991 valid_lft forever preferred_lft forever 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:54.991 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ cvl_0_0 == 
\c\v\l\_\0\_\1 ]] 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo cvl_0_0 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo cvl_0_1 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:29:54.992 192.168.100.9' 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:29:54.992 192.168.100.9' 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # head -n 1 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:29:54.992 192.168.100.9' 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # tail -n +2 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # head -n 1 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:29:54.992 02:11:14 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3368762 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3368762 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 3368762 ']' 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:54.992 02:11:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.992 [2024-10-09 02:11:14.494658] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:29:54.992 [2024-10-09 02:11:14.494768] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.992 [2024-10-09 02:11:14.627679] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:55.251 [2024-10-09 02:11:14.820546] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:55.251 [2024-10-09 02:11:14.820615] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:55.251 [2024-10-09 02:11:14.820628] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:55.251 [2024-10-09 02:11:14.820646] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:55.251 [2024-10-09 02:11:14.820655] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
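What follows is the stock target bring-up: nvmf_tgt starts with every tracepoint group enabled (-e 0xFFFF) on a four-core mask, the harness blocks in its waitforlisten helper until the RPC socket answers, and the RDMA transport is created just below at fio.sh@29. A manual equivalent, with a crude poll sketched in place of waitforlisten:

spdk=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
$spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# stand-in for waitforlisten: retry until the default RPC socket responds
until $spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
$spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192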
00:29:55.251 [2024-10-09 02:11:14.822882] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.251 [2024-10-09 02:11:14.822948] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:29:55.251 [2024-10-09 02:11:14.823007] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.251 [2024-10-09 02:11:14.823014] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:29:55.510 02:11:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:55.510 02:11:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:29:55.510 02:11:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:55.769 [2024-10-09 02:11:15.500830] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x6120000292c0/0x617000007c40) succeed. 00:29:55.769 [2024-10-09 02:11:15.510579] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029440/0x617000007fc0) succeed. 00:29:55.769 [2024-10-09 02:11:15.510615] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:29:55.769 02:11:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:55.769 02:11:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:55.769 02:11:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.029 02:11:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:56.029 Malloc1 00:29:56.287 02:11:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:56.287 02:11:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:56.545 02:11:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:56.803 [2024-10-09 02:11:16.440990] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:56.803 02:11:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:57.063 02:11:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme 00:29:57.063 02:11:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:57.063 02:11:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 
traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:57.063 02:11:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:57.063 02:11:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:57.063 02:11:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:57.063 02:11:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:29:57.063 02:11:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:57.063 02:11:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:57.063 02:11:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:57.063 02:11:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:29:57.063 02:11:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:57.063 02:11:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:57.063 02:11:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:57.063 02:11:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:57.063 02:11:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:29:57.063 02:11:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:57.063 02:11:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:57.322 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:57.322 fio-3.35 00:29:57.322 Starting 1 thread 00:29:59.856 00:29:59.856 test: (groupid=0, jobs=1): err= 0: pid=3369264: Wed Oct 9 02:11:19 2024 00:29:59.856 read: IOPS=15.1k, BW=59.1MiB/s (61.9MB/s)(118MiB/2004msec) 00:29:59.856 slat (nsec): min=1524, max=34623, avg=1697.47, stdev=481.29 00:29:59.856 clat (usec): min=2822, max=7661, avg=4209.27, stdev=101.23 00:29:59.856 lat (usec): min=2847, max=7663, avg=4210.97, stdev=101.19 00:29:59.856 clat percentiles (usec): 00:29:59.856 | 1.00th=[ 4146], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 4178], 00:29:59.856 | 30.00th=[ 4228], 40.00th=[ 4228], 50.00th=[ 4228], 60.00th=[ 4228], 00:29:59.856 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4228], 95.00th=[ 4228], 00:29:59.856 | 99.00th=[ 4359], 99.50th=[ 4555], 99.90th=[ 5473], 99.95th=[ 6587], 00:29:59.856 | 99.99th=[ 7635] 00:29:59.856 bw ( KiB/s): min=59480, max=61368, per=100.00%, avg=60478.00, stdev=877.51, samples=4 00:29:59.856 iops : min=14870, max=15342, avg=15119.50, stdev=219.38, samples=4 00:29:59.856 write: IOPS=15.1k, BW=59.1MiB/s (62.0MB/s)(119MiB/2004msec); 0 zone resets 00:29:59.856 slat (nsec): min=1571, max=18435, avg=1994.20, stdev=516.70 00:29:59.856 clat (usec): min=3017, max=7669, avg=4207.94, stdev=111.22 00:29:59.856 lat 
(usec): min=3028, max=7671, avg=4209.93, stdev=111.18 00:29:59.856 clat percentiles (usec): 00:29:59.856 | 1.00th=[ 4146], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 4178], 00:29:59.856 | 30.00th=[ 4228], 40.00th=[ 4228], 50.00th=[ 4228], 60.00th=[ 4228], 00:29:59.856 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4228], 95.00th=[ 4228], 00:29:59.856 | 99.00th=[ 4359], 99.50th=[ 4555], 99.90th=[ 6063], 99.95th=[ 7046], 00:29:59.856 | 99.99th=[ 7635] 00:29:59.856 bw ( KiB/s): min=59768, max=61464, per=99.98%, avg=60536.00, stdev=702.91, samples=4 00:29:59.856 iops : min=14942, max=15366, avg=15134.00, stdev=175.73, samples=4 00:29:59.856 lat (msec) : 4=0.66%, 10=99.34% 00:29:59.856 cpu : usr=99.30%, sys=0.30%, ctx=8, majf=0, minf=1324 00:29:59.856 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:29:59.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:59.856 issued rwts: total=30298,30336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:59.856 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:59.856 00:29:59.856 Run status group 0 (all jobs): 00:29:59.856 READ: bw=59.1MiB/s (61.9MB/s), 59.1MiB/s-59.1MiB/s (61.9MB/s-61.9MB/s), io=118MiB (124MB), run=2004-2004msec 00:29:59.856 WRITE: bw=59.1MiB/s (62.0MB/s), 59.1MiB/s-59.1MiB/s (62.0MB/s-62.0MB/s), io=119MiB (124MB), run=2004-2004msec 00:29:59.856 ----------------------------------------------------- 00:29:59.856 Suppressions used: 00:29:59.856 count bytes template 00:29:59.856 1 63 /usr/src/fio/parse.c 00:29:59.856 1 8 libtcmalloc_minimal.so 00:29:59.856 ----------------------------------------------------- 00:29:59.856 00:29:59.856 02:11:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:29:59.856 02:11:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:29:59.856 02:11:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:59.856 02:11:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:59.856 02:11:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:59.856 02:11:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:29:59.856 02:11:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:59.856 02:11:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:59.856 02:11:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.856 02:11:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:29:59.856 02:11:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:59.856 02:11:19 nvmf_rdma.nvmf_host.nvmf_fio_host 
-- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:00.114 02:11:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:00.114 02:11:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:00.114 02:11:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:30:00.114 02:11:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:00.114 02:11:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:30:00.372 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:00.372 fio-3.35 00:30:00.372 Starting 1 thread 00:30:02.900 00:30:02.900 test: (groupid=0, jobs=1): err= 0: pid=3369690: Wed Oct 9 02:11:22 2024 00:30:02.900 read: IOPS=12.1k, BW=189MiB/s (199MB/s)(374MiB/1975msec) 00:30:02.900 slat (nsec): min=2543, max=44379, avg=2953.10, stdev=1125.11 00:30:02.900 clat (usec): min=684, max=9106, avg=2194.16, stdev=1471.65 00:30:02.900 lat (usec): min=687, max=9109, avg=2197.11, stdev=1472.00 00:30:02.900 clat percentiles (usec): 00:30:02.900 | 1.00th=[ 947], 5.00th=[ 1106], 10.00th=[ 1205], 20.00th=[ 1319], 00:30:02.900 | 30.00th=[ 1450], 40.00th=[ 1565], 50.00th=[ 1680], 60.00th=[ 1811], 00:30:02.900 | 70.00th=[ 2073], 80.00th=[ 2409], 90.00th=[ 4686], 95.00th=[ 5866], 00:30:02.900 | 99.00th=[ 7635], 99.50th=[ 8225], 99.90th=[ 8848], 99.95th=[ 8848], 00:30:02.900 | 99.99th=[ 8979] 00:30:02.900 bw ( KiB/s): min=92103, max=97696, per=48.73%, avg=94489.75, stdev=2436.21, samples=4 00:30:02.900 iops : min= 5756, max= 6106, avg=5905.50, stdev=152.41, samples=4 00:30:02.900 write: IOPS=6834, BW=107MiB/s (112MB/s)(191MiB/1793msec); 0 zone resets 00:30:02.900 slat (usec): min=27, max=185, avg=30.44, stdev= 4.37 00:30:02.900 clat (usec): min=5084, max=22886, avg=14453.41, stdev=2234.50 00:30:02.900 lat (usec): min=5117, max=22915, avg=14483.84, stdev=2234.24 00:30:02.900 clat percentiles (usec): 00:30:02.900 | 1.00th=[ 8586], 5.00th=[11600], 10.00th=[12256], 20.00th=[12780], 00:30:02.900 | 30.00th=[13304], 40.00th=[13698], 50.00th=[14091], 60.00th=[14615], 00:30:02.900 | 70.00th=[15401], 80.00th=[16319], 90.00th=[17433], 95.00th=[18482], 00:30:02.900 | 99.00th=[20055], 99.50th=[20841], 99.90th=[22676], 99.95th=[22676], 00:30:02.900 | 99.99th=[22938] 00:30:02.900 bw ( KiB/s): min=93285, max=98528, per=88.83%, avg=97145.25, stdev=2574.43, samples=4 00:30:02.900 iops : min= 5830, max= 6158, avg=6071.50, stdev=161.06, samples=4 00:30:02.900 lat (usec) : 750=0.01%, 1000=1.21% 00:30:02.900 lat (msec) : 2=43.50%, 4=13.90%, 10=8.16%, 20=32.81%, 50=0.41% 00:30:02.900 cpu : usr=95.21%, sys=4.19%, ctx=87, majf=0, minf=12571 00:30:02.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:30:02.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:02.900 issued rwts: total=23934,12255,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.900 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:02.900 00:30:02.900 Run status group 0 
(all jobs): 00:30:02.900 READ: bw=189MiB/s (199MB/s), 189MiB/s-189MiB/s (199MB/s-199MB/s), io=374MiB (392MB), run=1975-1975msec 00:30:02.900 WRITE: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=191MiB (201MB), run=1793-1793msec 00:30:02.900 ----------------------------------------------------- 00:30:02.900 Suppressions used: 00:30:02.900 count bytes template 00:30:02.900 1 63 /usr/src/fio/parse.c 00:30:02.900 138 13248 /usr/src/fio/iolog.c 00:30:02.900 1 8 libtcmalloc_minimal.so 00:30:02.900 ----------------------------------------------------- 00:30:02.900 00:30:02.900 02:11:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:03.158 02:11:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:03.158 02:11:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:03.158 02:11:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:03.158 02:11:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:30:03.158 02:11:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:30:03.158 02:11:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:03.158 02:11:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:03.158 02:11:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:30:03.416 02:11:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:30:03.416 02:11:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:30:03.416 02:11:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 192.168.100.8 00:30:06.695 Nvme0n1 00:30:06.695 02:11:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:18.890 02:11:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=d2d66224-9182-45b9-8479-a012469122be 00:30:18.890 02:11:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb d2d66224-9182-45b9-8479-a012469122be 00:30:18.890 02:11:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=d2d66224-9182-45b9-8479-a012469122be 00:30:18.890 02:11:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:18.890 02:11:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:18.890 02:11:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:18.890 02:11:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:18.890 02:11:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:18.890 { 00:30:18.890 "uuid": "d2d66224-9182-45b9-8479-a012469122be", 00:30:18.890 "name": "lvs_0", 00:30:18.890 "base_bdev": 
"Nvme0n1", 00:30:18.890 "total_data_clusters": 3725, 00:30:18.890 "free_clusters": 3725, 00:30:18.890 "block_size": 512, 00:30:18.890 "cluster_size": 1073741824 00:30:18.890 } 00:30:18.890 ]' 00:30:18.890 02:11:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="d2d66224-9182-45b9-8479-a012469122be") .free_clusters' 00:30:18.890 02:11:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=3725 00:30:18.890 02:11:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="d2d66224-9182-45b9-8479-a012469122be") .cluster_size' 00:30:18.890 02:11:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:30:18.890 02:11:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=3814400 00:30:18.890 02:11:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 3814400 00:30:18.890 3814400 00:30:18.890 02:11:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 3814400 00:30:18.890 3b5cf8d6-5d95-4c03-8239-1efd4dc219a0 00:30:18.890 02:11:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:18.890 02:11:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:18.890 02:11:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:30:18.890 02:11:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:18.890 02:11:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:18.890 02:11:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:18.890 02:11:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:18.890 02:11:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:18.890 02:11:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:30:18.890 02:11:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:18.890 02:11:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:18.890 02:11:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:18.891 02:11:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 
00:30:18.891 02:11:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:18.891 02:11:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:18.891 02:11:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:18.891 02:11:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:18.891 02:11:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:30:18.891 02:11:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:18.891 02:11:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:19.149 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:19.149 fio-3.35 00:30:19.149 Starting 1 thread 00:30:21.678 00:30:21.678 test: (groupid=0, jobs=1): err= 0: pid=3372169: Wed Oct 9 02:11:41 2024 00:30:21.678 read: IOPS=5454, BW=21.3MiB/s (22.3MB/s)(42.7MiB/2006msec) 00:30:21.678 slat (nsec): min=1531, max=33498, avg=1763.40, stdev=542.49 00:30:21.678 clat (usec): min=203, max=898473, avg=11721.48, stdev=68882.95 00:30:21.678 lat (usec): min=204, max=898492, avg=11723.24, stdev=68883.08 00:30:21.678 clat percentiles (msec): 00:30:21.678 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 7], 00:30:21.678 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:30:21.678 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 7], 00:30:21.678 | 99.00th=[ 8], 99.50th=[ 894], 99.90th=[ 902], 99.95th=[ 902], 00:30:21.678 | 99.99th=[ 902] 00:30:21.678 bw ( KiB/s): min= 384, max=40360, per=99.65%, avg=21742.00, stdev=21432.36, samples=4 00:30:21.678 iops : min= 96, max=10090, avg=5435.50, stdev=5358.09, samples=4 00:30:21.678 write: IOPS=5431, BW=21.2MiB/s (22.2MB/s)(42.6MiB/2006msec); 0 zone resets 00:30:21.678 slat (nsec): min=1584, max=18353, avg=2219.65, stdev=539.52 00:30:21.678 clat (usec): min=401, max=898978, avg=11464.71, stdev=66916.77 00:30:21.678 lat (usec): min=403, max=898983, avg=11466.93, stdev=66916.87 00:30:21.678 clat percentiles (msec): 00:30:21.678 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 7], 00:30:21.678 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:30:21.678 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 7], 00:30:21.678 | 99.00th=[ 8], 99.50th=[ 894], 99.90th=[ 902], 99.95th=[ 902], 00:30:21.678 | 99.99th=[ 902] 00:30:21.678 bw ( KiB/s): min= 416, max=40024, per=99.78%, avg=21678.00, stdev=21149.45, samples=4 00:30:21.678 iops : min= 104, max=10006, avg=5419.50, stdev=5287.36, samples=4 00:30:21.678 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:30:21.678 lat (msec) : 2=0.06%, 4=0.32%, 10=98.86%, 20=0.14%, 1000=0.59% 00:30:21.678 cpu : usr=99.40%, sys=0.25%, ctx=8, majf=0, minf=1348 00:30:21.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:21.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:21.678 issued rwts: total=10942,10895,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:30:21.678 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:21.678 00:30:21.678 Run status group 0 (all jobs): 00:30:21.678 READ: bw=21.3MiB/s (22.3MB/s), 21.3MiB/s-21.3MiB/s (22.3MB/s-22.3MB/s), io=42.7MiB (44.8MB), run=2006-2006msec 00:30:21.678 WRITE: bw=21.2MiB/s (22.2MB/s), 21.2MiB/s-21.2MiB/s (22.2MB/s-22.2MB/s), io=42.6MiB (44.6MB), run=2006-2006msec 00:30:21.936 ----------------------------------------------------- 00:30:21.936 Suppressions used: 00:30:21.936 count bytes template 00:30:21.936 1 64 /usr/src/fio/parse.c 00:30:21.936 1 8 libtcmalloc_minimal.so 00:30:21.936 ----------------------------------------------------- 00:30:21.936 00:30:21.936 02:11:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:22.195 02:11:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:24.723 02:11:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=94d06475-96fc-4cfe-9897-eccb59faa64d 00:30:24.723 02:11:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 94d06475-96fc-4cfe-9897-eccb59faa64d 00:30:24.723 02:11:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=94d06475-96fc-4cfe-9897-eccb59faa64d 00:30:24.723 02:11:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:24.723 02:11:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:24.723 02:11:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:24.723 02:11:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:24.723 02:11:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:24.723 { 00:30:24.723 "uuid": "d2d66224-9182-45b9-8479-a012469122be", 00:30:24.723 "name": "lvs_0", 00:30:24.723 "base_bdev": "Nvme0n1", 00:30:24.723 "total_data_clusters": 3725, 00:30:24.723 "free_clusters": 0, 00:30:24.723 "block_size": 512, 00:30:24.723 "cluster_size": 1073741824 00:30:24.723 }, 00:30:24.723 { 00:30:24.723 "uuid": "94d06475-96fc-4cfe-9897-eccb59faa64d", 00:30:24.723 "name": "lvs_n_0", 00:30:24.723 "base_bdev": "3b5cf8d6-5d95-4c03-8239-1efd4dc219a0", 00:30:24.723 "total_data_clusters": 952668, 00:30:24.723 "free_clusters": 952668, 00:30:24.723 "block_size": 512, 00:30:24.723 "cluster_size": 4194304 00:30:24.723 } 00:30:24.723 ]' 00:30:24.723 02:11:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="94d06475-96fc-4cfe-9897-eccb59faa64d") .free_clusters' 00:30:24.723 02:11:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=952668 00:30:24.723 02:11:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="94d06475-96fc-4cfe-9897-eccb59faa64d") .cluster_size' 00:30:24.723 02:11:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:24.723 02:11:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=3810672 00:30:24.723 02:11:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 3810672 00:30:24.723 3810672 
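For reference, the get_lvs_free_mb helper traced above reduces to free_mb = free_clusters * cluster_size / 1024 / 1024: for lvs_0 that was 3725 clusters of 1 GiB (3725 * 1073741824 B -> 3814400 MiB), and for lvs_n_0 it is 952668 clusters of 4 MiB (952668 * 4194304 B -> 3810672 MiB), matching the values echoed in the trace. A minimal stand-alone sketch of the same arithmetic (illustrative names, not the exact helper from common/autotest_common.sh):

    # lvs_free_mb FREE_CLUSTERS CLUSTER_SIZE_BYTES -> prints the free space in MiB,
    # which is the size unit bdev_lvol_create is invoked with above
    lvs_free_mb() {
        local fc=$1 cs=$2
        echo $(( fc * cs / 1024 / 1024 ))
    }
    lvs_free_mb 3725 1073741824   # 3814400 (lvs_0: 1 GiB clusters)
    lvs_free_mb 952668 4194304    # 3810672 (lvs_n_0: 4 MiB clusters)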
00:30:24.723 02:11:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 3810672 00:30:32.832 feebfc89-8aa1-420f-ad04-90950610bfef 00:30:32.832 02:11:52 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:33.090 02:11:52 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:33.348 02:11:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:30:33.607 02:11:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:33.607 02:11:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:33.607 02:11:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:33.607 02:11:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:33.607 02:11:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:33.607 02:11:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:30:33.607 02:11:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:33.607 02:11:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:33.607 02:11:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:33.607 02:11:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:30:33.607 02:11:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:33.607 02:11:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:33.607 02:11:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:33.607 02:11:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:33.607 02:11:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:30:33.607 02:11:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:33.607 02:11:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:33.866 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:33.866 fio-3.35 00:30:33.866 Starting 1 thread 00:30:36.400 00:30:36.400 test: (groupid=0, jobs=1): err= 0: pid=3374081: Wed Oct 9 02:11:56 2024 00:30:36.400 read: IOPS=8437, BW=33.0MiB/s (34.6MB/s)(66.2MiB/2007msec) 00:30:36.400 slat (nsec): min=1528, max=32664, avg=1734.49, stdev=390.56 00:30:36.400 clat (usec): min=4150, max=12997, avg=7448.82, stdev=226.50 00:30:36.401 lat (usec): min=4153, max=12998, avg=7450.56, stdev=226.45 00:30:36.401 clat percentiles (usec): 00:30:36.401 | 1.00th=[ 7308], 5.00th=[ 7373], 10.00th=[ 7373], 20.00th=[ 7439], 00:30:36.401 | 30.00th=[ 7439], 40.00th=[ 7439], 50.00th=[ 7439], 60.00th=[ 7439], 00:30:36.401 | 70.00th=[ 7439], 80.00th=[ 7504], 90.00th=[ 7504], 95.00th=[ 7504], 00:30:36.401 | 99.00th=[ 7701], 99.50th=[ 8160], 99.90th=[11076], 99.95th=[11994], 00:30:36.401 | 99.99th=[13042] 00:30:36.401 bw ( KiB/s): min=31496, max=34664, per=99.98%, avg=33744.00, stdev=1513.83, samples=4 00:30:36.401 iops : min= 7874, max= 8666, avg=8436.00, stdev=378.46, samples=4 00:30:36.401 write: IOPS=8431, BW=32.9MiB/s (34.5MB/s)(66.1MiB/2007msec); 0 zone resets 00:30:36.401 slat (nsec): min=1571, max=14036, avg=2023.20, stdev=347.51 00:30:36.401 clat (usec): min=4159, max=13004, avg=7471.56, stdev=240.05 00:30:36.401 lat (usec): min=4164, max=13006, avg=7473.58, stdev=240.02 00:30:36.401 clat percentiles (usec): 00:30:36.401 | 1.00th=[ 7373], 5.00th=[ 7373], 10.00th=[ 7439], 20.00th=[ 7439], 00:30:36.401 | 30.00th=[ 7439], 40.00th=[ 7439], 50.00th=[ 7439], 60.00th=[ 7504], 00:30:36.401 | 70.00th=[ 7504], 80.00th=[ 7504], 90.00th=[ 7504], 95.00th=[ 7570], 00:30:36.401 | 99.00th=[ 7701], 99.50th=[ 8029], 99.90th=[11994], 99.95th=[12125], 00:30:36.401 | 99.99th=[13042] 00:30:36.401 bw ( KiB/s): min=32256, max=34296, per=99.97%, avg=33718.00, stdev=977.56, samples=4 00:30:36.401 iops : min= 8064, max= 8574, avg=8429.50, stdev=244.39, samples=4 00:30:36.401 lat (msec) : 10=99.81%, 20=0.19% 00:30:36.401 cpu : usr=99.20%, sys=0.45%, ctx=8, majf=0, minf=1879 00:30:36.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:36.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:36.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:36.401 issued rwts: total=16935,16923,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:36.401 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:36.401 00:30:36.401 Run status group 0 (all jobs): 00:30:36.401 READ: bw=33.0MiB/s (34.6MB/s), 33.0MiB/s-33.0MiB/s (34.6MB/s-34.6MB/s), io=66.2MiB (69.4MB), run=2007-2007msec 00:30:36.401 WRITE: bw=32.9MiB/s (34.5MB/s), 32.9MiB/s-32.9MiB/s (34.5MB/s-34.5MB/s), io=66.1MiB (69.3MB), run=2007-2007msec 00:30:36.660 ----------------------------------------------------- 00:30:36.660 Suppressions used: 00:30:36.660 count bytes template 00:30:36.660 1 64 /usr/src/fio/parse.c 00:30:36.660 1 8 libtcmalloc_minimal.so 00:30:36.660 ----------------------------------------------------- 00:30:36.660 00:30:36.660 02:11:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:36.919 02:11:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:36.919 02:11:56 
nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:58.968 02:12:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:58.968 02:12:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:11.157 02:12:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:11.157 02:12:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:31:15.337 rmmod nvme_rdma 00:31:15.337 rmmod nvme_fabrics 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 3368762 ']' 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 3368762 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 3368762 ']' 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 3368762 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3368762 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3368762' 00:31:15.337 killing process with pid 3368762 00:31:15.337 02:12:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 3368762 00:31:15.337 02:12:34 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 3368762 00:31:16.271 02:12:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:16.271 02:12:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:31:16.271 00:31:16.271 real 1m27.991s 00:31:16.271 user 5m38.044s 00:31:16.271 sys 0m18.508s 00:31:16.271 02:12:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:16.271 02:12:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.271 ************************************ 00:31:16.271 END TEST nvmf_fio_host 00:31:16.271 ************************************ 00:31:16.271 02:12:35 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:31:16.271 02:12:35 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:16.271 02:12:35 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:16.271 02:12:35 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.271 ************************************ 00:31:16.271 START TEST nvmf_failover 00:31:16.271 ************************************ 00:31:16.271 02:12:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:31:16.529 * Looking for test storage... 00:31:16.529 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover 
-- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:16.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.529 --rc genhtml_branch_coverage=1 00:31:16.529 --rc genhtml_function_coverage=1 00:31:16.529 --rc genhtml_legend=1 00:31:16.529 --rc geninfo_all_blocks=1 00:31:16.529 --rc geninfo_unexecuted_blocks=1 00:31:16.529 00:31:16.529 ' 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:16.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.529 --rc genhtml_branch_coverage=1 00:31:16.529 --rc genhtml_function_coverage=1 00:31:16.529 --rc genhtml_legend=1 00:31:16.529 --rc geninfo_all_blocks=1 00:31:16.529 --rc geninfo_unexecuted_blocks=1 00:31:16.529 00:31:16.529 ' 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:16.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.529 --rc genhtml_branch_coverage=1 00:31:16.529 --rc genhtml_function_coverage=1 00:31:16.529 --rc genhtml_legend=1 00:31:16.529 --rc geninfo_all_blocks=1 00:31:16.529 --rc geninfo_unexecuted_blocks=1 00:31:16.529 00:31:16.529 ' 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:16.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.529 --rc genhtml_branch_coverage=1 00:31:16.529 --rc genhtml_function_coverage=1 00:31:16.529 --rc genhtml_legend=1 00:31:16.529 --rc geninfo_all_blocks=1 00:31:16.529 --rc geninfo_unexecuted_blocks=1 00:31:16.529 00:31:16.529 ' 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:16.529 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:16.530 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:16.530 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:16.530 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:16.530 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:16.530 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:16.530 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:16.530 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:31:16.530 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:16.530 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:16.530 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:31:16.530 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:16.530 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:16.530 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:16.530 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:16.530 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.530 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.530 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.530 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:16.530 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:16.530 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:31:16.530 02:12:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:31:23.086 Found 0000:18:00.0 (0x8086 - 0x159b) 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:31:23.086 Found 0000:18:00.1 (0x8086 - 0x159b) 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
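The gather_supported_nvmf_pci_devs step traced here classifies NICs by PCI vendor:device ID; both ports report 0x8086:0x159b (Intel E810), so the e810 list is selected, the ice driver is expected, and NVME_CONNECT is switched to 'nvme connect -i 15' for RDMA. A minimal sketch of reading those IDs from sysfs for one function, under the sysfs layout the trace itself uses (the real script's pci_bus_cache bookkeeping is omitted):

    # print vendor:device for one PCI function, e.g. 0000:18:00.0 -> 0x8086:0x159b
    pci=0000:18:00.0
    vendor=$(cat /sys/bus/pci/devices/$pci/vendor)
    device=$(cat /sys/bus/pci/devices/$pci/device)
    echo "Found $pci ($vendor - $device)"
    # 0x159b is the E810 ID matched above; its net devices live under the same sysfs node
    [[ $device == 0x159b ]] && ls /sys/bus/pci/devices/$pci/net/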
00:31:23.086 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@403 -- # modinfo irdma 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:31:23.087 Found net devices under 0000:18:00.0: cvl_0_0 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:31:23.087 Found net devices under 0000:18:00.1: cvl_0_1 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # rdma_device_init 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:23.087 02:12:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@528 -- # allocate_nic_ips 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo cvl_0_0 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo cvl_0_1 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:31:23.087 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:31:23.087 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:31:23.087 altname enp24s0f0np0 00:31:23.087 altname ens785f0np0 00:31:23.087 inet 192.168.100.8/24 scope global cvl_0_0 00:31:23.087 valid_lft forever preferred_lft forever 00:31:23.087 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:31:23.087 valid_lft forever preferred_lft forever 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:31:23.087 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:31:23.087 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:31:23.087 altname enp24s0f1np1 00:31:23.087 altname ens785f1np1 00:31:23.087 inet 192.168.100.9/24 scope global cvl_0_1 00:31:23.087 valid_lft forever preferred_lft forever 00:31:23.087 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:31:23.087 valid_lft forever preferred_lft forever 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:23.087 02:12:42 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo cvl_0_0 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:23.087 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo cvl_0_1 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:31:23.088 192.168.100.9' 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:31:23.088 192.168.100.9' 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # head -n 1 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:31:23.088 192.168.100.9' 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # tail -n +2 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # head -n 1 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # 
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=3382722 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 3382722 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3382722 ']' 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:23.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:23.088 02:12:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:23.088 [2024-10-09 02:12:42.301876] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:31:23.088 [2024-10-09 02:12:42.301978] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:23.088 [2024-10-09 02:12:42.431201] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:23.088 [2024-10-09 02:12:42.618106] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:23.088 [2024-10-09 02:12:42.618156] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:23.088 [2024-10-09 02:12:42.618169] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:23.088 [2024-10-09 02:12:42.618182] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:23.088 [2024-10-09 02:12:42.618192] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
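Note on the launch above: nvmfappstart starts the target with core mask 0xE (binary 1110), so the reactors that follow come up on cores 1, 2 and 3, and -e 0xFFFF enables every tracepoint group, which is why app_setup_trace prints the snapshot hints. A minimal sketch of the same launch plus the suggested trace capture, assuming a built SPDK tree at $SPDK_DIR (placeholder path, not from the log):

    # Start the NVMe-oF target on cores 1-3 (mask 0xE) with all tracepoint
    # groups enabled, mirroring the harness invocation logged above.
    "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    # Per the notice above, copy the trace shared-memory file for offline analysis.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0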
00:31:23.088 [2024-10-09 02:12:42.619787] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:31:23.088 [2024-10-09 02:12:42.619843] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.088 [2024-10-09 02:12:42.619850] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:31:23.349 02:12:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:23.349 02:12:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:31:23.349 02:12:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:23.349 02:12:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:23.349 02:12:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:23.607 02:12:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:23.607 02:12:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:31:23.607 [2024-10-09 02:12:43.356186] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x612000028fc0/0x617000007c40) succeed. 00:31:23.607 [2024-10-09 02:12:43.365804] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029140/0x617000007fc0) succeed. 00:31:23.607 [2024-10-09 02:12:43.365837] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:31:23.607 02:12:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:23.865 Malloc0 00:31:23.865 02:12:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:24.123 02:12:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:24.382 02:12:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:24.641 [2024-10-09 02:12:44.263592] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:24.641 02:12:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:31:24.899 [2024-10-09 02:12:44.464252] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:31:24.899 02:12:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:31:24.899 [2024-10-09 02:12:44.664965] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:31:24.899 02:12:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 
-- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:24.899 02:12:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3383089 00:31:24.899 02:12:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:24.899 02:12:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3383089 /var/tmp/bdevperf.sock 00:31:24.899 02:12:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3383089 ']' 00:31:24.899 02:12:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:24.899 02:12:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:24.899 02:12:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:24.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:24.899 02:12:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:24.899 02:12:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:25.833 02:12:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:25.833 02:12:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:31:25.834 02:12:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:26.097 NVMe0n1 00:31:26.097 02:12:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:26.354 00:31:26.354 02:12:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:26.354 02:12:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3383275 00:31:26.354 02:12:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:27.729 02:12:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:27.729 02:12:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:31.016 02:12:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:31.016 00:31:31.016 02:12:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener 
nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:31:31.274 02:12:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:34.557 02:12:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:34.557 [2024-10-09 02:12:54.065354] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:34.557 02:12:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:35.490 02:12:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:31:35.748 02:12:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3383275 00:31:42.330 { 00:31:42.330 "results": [ 00:31:42.330 { 00:31:42.330 "job": "NVMe0n1", 00:31:42.330 "core_mask": "0x1", 00:31:42.330 "workload": "verify", 00:31:42.330 "status": "finished", 00:31:42.330 "verify_range": { 00:31:42.330 "start": 0, 00:31:42.330 "length": 16384 00:31:42.330 }, 00:31:42.330 "queue_depth": 128, 00:31:42.330 "io_size": 4096, 00:31:42.330 "runtime": 15.006748, 00:31:42.330 "iops": 13527.58105886765, 00:31:42.330 "mibps": 52.84211351120176, 00:31:42.330 "io_failed": 3981, 00:31:42.330 "io_timeout": 0, 00:31:42.330 "avg_latency_us": 9254.807411146057, 00:31:42.330 "min_latency_us": 436.31304347826085, 00:31:42.330 "max_latency_us": 609085.8852173913 00:31:42.330 } 00:31:42.330 ], 00:31:42.330 "core_count": 1 00:31:42.330 } 00:31:42.330 02:13:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3383089 00:31:42.330 02:13:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3383089 ']' 00:31:42.330 02:13:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3383089 00:31:42.330 02:13:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:31:42.330 02:13:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:42.330 02:13:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3383089 00:31:42.330 02:13:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:42.330 02:13:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:42.330 02:13:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3383089' 00:31:42.330 killing process with pid 3383089 00:31:42.330 02:13:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3383089 00:31:42.330 02:13:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3383089 00:31:42.904 02:13:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:42.904 [2024-10-09 02:12:44.775734] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
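In the bdevperf results block above, the throughput field follows directly from the other fields: mibps = iops * io_size / 2^20. A one-line check with bc against the reported numbers:

    # 13527.58105886765 IOPS of 4096-byte I/Os, converted to MiB/s:
    echo 'scale=6; 13527.58105886765 * 4096 / 1048576' | bc
    # prints 52.842113, matching "mibps": 52.84211351120176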
00:31:42.904 [2024-10-09 02:12:44.775838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3383089 ] 00:31:42.904 [2024-10-09 02:12:44.904685] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:42.904 [2024-10-09 02:12:45.110937] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.904 Running I/O for 15 seconds... 00:31:42.904 15232.00 IOPS, 59.50 MiB/s [2024-10-09T00:13:02.724Z] [2024-10-09 02:12:47.895581] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:31:42.904 [2024-10-09 02:12:47.895660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b1000 len:0x1000 key:0x1d647b44 00:31:42.904 [2024-10-09 02:12:47.895679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.904 [2024-10-09 02:12:47.895715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075af000 len:0x1000 key:0x1d647b44 00:31:42.904 [2024-10-09 02:12:47.895730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.904 [2024-10-09 02:12:47.895747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ad000 len:0x1000 key:0x1d647b44 00:31:42.904 [2024-10-09 02:12:47.895761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.904 [2024-10-09 02:12:47.895780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ab000 len:0x1000 key:0x1d647b44 00:31:42.904 [2024-10-09 02:12:47.895793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.904 [2024-10-09 02:12:47.895815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a9000 len:0x1000 key:0x1d647b44 00:31:42.904 [2024-10-09 02:12:47.895828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.904 [2024-10-09 02:12:47.895846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a7000 len:0x1000 key:0x1d647b44 00:31:42.904 [2024-10-09 02:12:47.895859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.904 [2024-10-09 02:12:47.895878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a5000 len:0x1000 key:0x1d647b44 00:31:42.904 [2024-10-09 02:12:47.895892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.904 [2024-10-09 02:12:47.895911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a3000 len:0x1000 key:0x1d647b44 00:31:42.904 [2024-10-09 02:12:47.895924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.904 [2024-10-09 02:12:47.895942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a1000 len:0x1000 key:0x1d647b44 00:31:42.904 [2024-10-09 02:12:47.895956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.904 [2024-10-09 02:12:47.895976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759f000 len:0x1000 key:0x1d647b44 00:31:42.904 [2024-10-09 02:12:47.895995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.904 [2024-10-09 02:12:47.896013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759d000 len:0x1000 key:0x1d647b44 00:31:42.904 [2024-10-09 02:12:47.896028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.904 [2024-10-09 02:12:47.896047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759b000 len:0x1000 key:0x1d647b44 00:31:42.904 [2024-10-09 02:12:47.896061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.904 [2024-10-09 02:12:47.896082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007599000 len:0x1000 key:0x1d647b44 00:31:42.904 [2024-10-09 02:12:47.896095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.904 [2024-10-09 02:12:47.896113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007597000 len:0x1000 key:0x1d647b44 00:31:42.904 [2024-10-09 02:12:47.896126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.904 [2024-10-09 02:12:47.896142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007595000 len:0x1000 key:0x1d647b44 00:31:42.904 [2024-10-09 02:12:47.896155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.904 [2024-10-09 02:12:47.896173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007593000 len:0x1000 key:0x1d647b44 00:31:42.904 [2024-10-09 02:12:47.896186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.904 [2024-10-09 02:12:47.896204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3504 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200007591000 len:0x1000 key:0x1d647b44 00:31:42.904 [2024-10-09 02:12:47.896217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.904 [2024-10-09 02:12:47.896234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758f000 len:0x1000 key:0x1d647b44 00:31:42.904 [2024-10-09 02:12:47.896246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.904 [2024-10-09 02:12:47.896263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758d000 len:0x1000 key:0x1d647b44 00:31:42.904 [2024-10-09 02:12:47.896284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.904 [2024-10-09 02:12:47.896301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758b000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007589000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007587000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007585000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007583000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007581000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757f000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896491] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757d000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757b000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007579000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007577000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007575000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007573000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007571000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756f000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756d000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:42.905 [2024-10-09 02:12:47.896784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756b000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007569000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007567000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007565000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007563000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007561000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755f000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.896976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.896992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755d000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.897005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.897022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755b000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.897034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.897054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007559000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.897067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.897083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007557000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.897096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.897112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007555000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.897124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.897141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007553000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.897153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.897169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007551000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.897182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.897199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754f000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.897211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.897227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754d000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.897240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.897257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754b000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.897269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.897289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007549000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.897302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.897319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007547000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 
02:12:47.897331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.897347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007545000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.897360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.897376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007543000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.897390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.897407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007541000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.897419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.897436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753f000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.897448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.905 [2024-10-09 02:12:47.897464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753d000 len:0x1000 key:0x1d647b44 00:31:42.905 [2024-10-09 02:12:47.897476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.897493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753b000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.897505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.897524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007539000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.897541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.897559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007537000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.897572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.897589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007535000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.897601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.897618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007533000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.897631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.897648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007531000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.897660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.897677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752f000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.897690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.897707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752d000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.897718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.897737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752b000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.897750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.897770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007529000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.897782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.897803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007527000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.897816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.897833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007525000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.897846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.897863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007523000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.897876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.897892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:28 nsid:1 lba:3952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007521000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.897906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.897923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751f000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.897935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.897951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751d000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.897964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.897981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751b000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.897993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007519000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.898027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007517000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.898057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007515000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.898087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007513000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.898116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007511000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.898145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750f000 len:0x1000 key:0x1d647b44 00:31:42.906 
[2024-10-09 02:12:47.898175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750d000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.898209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750b000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.898238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007509000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.898271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007507000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.898301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007505000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.898331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007503000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.898360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007501000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.898394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000074ff000 len:0x1000 key:0x1d647b44 00:31:42.906 [2024-10-09 02:12:47.898424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.906 [2024-10-09 02:12:47.898455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:42.906 [2024-10-09 02:12:47.898472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.906 [2024-10-09 02:12:47.898484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.906 [2024-10-09 02:12:47.898517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.906 [2024-10-09 02:12:47.898550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.906 [2024-10-09 02:12:47.898580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.906 [2024-10-09 02:12:47.898611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.906 [2024-10-09 02:12:47.898640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.906 [2024-10-09 02:12:47.898669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.906 [2024-10-09 02:12:47.898685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.906 [2024-10-09 02:12:47.898698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.907 [2024-10-09 02:12:47.898714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.907 [2024-10-09 02:12:47.898727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.907 [2024-10-09 02:12:47.898747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.907 [2024-10-09 02:12:47.898759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.907 [2024-10-09 02:12:47.898776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.907 [2024-10-09 02:12:47.898788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs repeat for every remaining queued WRITE on this qpair (lba 4192 through 4384, len:8 each), all completed as ABORTED - SQ DELETION (00/08) ...]
00:31:42.907 [2024-10-09 02:12:47.900078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:42.907 [2024-10-09 02:12:47.900100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:42.907 [2024-10-09 02:12:47.900120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4392 len:8 PRP1 0x0 PRP2 0x0 00:31:42.907 [2024-10-09 02:12:47.900136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.907 [2024-10-09 02:12:47.900351] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001a1e5c00 was disconnected and freed. reset controller. 00:31:42.907 [2024-10-09 02:12:47.900370] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:31:42.907 [2024-10-09 02:12:47.900387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
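Note on the dump above: ABORTED - SQ DELETION (00/08) is status sct 0x00 / sc 0x08, meaning the submission queue was deleted out from under the queued commands when the path went down, so none of them executed. A minimal sketch of how a consumer of the SPDK NVMe driver might treat that status as retryable in its completion callback; struct my_io, retry_io() and complete_io() are hypothetical bookkeeping, only the status constants and callback shape come from the public SPDK headers:

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Hypothetical per-I/O context; not from this log. */
    struct my_io { uint64_t lba; };

    static void retry_io(struct my_io *io)    { printf("requeue lba %" PRIu64 "\n", io->lba); }
    static void complete_io(struct my_io *io) { (void)io; }

    /* spdk_nvme_cmd_cb-style completion callback. */
    static void io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        struct my_io *io = cb_arg;

        /* ABORTED - SQ DELETION: sct 0x00 (generic), sc 0x08. The command
         * never executed, so it is safe to resubmit once the controller
         * reset/failover below finishes. */
        if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
            retry_io(io);
            return;
        }
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "I/O lba %" PRIu64 " failed: sct %u sc %u\n",
                    io->lba, (unsigned)cpl->status.sct, (unsigned)cpl->status.sc);
            return;
        }
        complete_io(io);
    }

Because the aborted commands never reached the media, retrying after the reset is safe; any other error status should be surfaced instead.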
00:31:42.907 [2024-10-09 02:12:47.903545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:42.907 [2024-10-09 02:12:47.903621] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:31:42.907 [2024-10-09 02:12:47.930849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:42.907 [2024-10-09 02:12:47.982185] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:42.907 10470.00 IOPS, 40.90 MiB/s [2024-10-09T00:13:02.727Z] 12111.33 IOPS, 47.31 MiB/s [2024-10-09T00:13:02.727Z] 12922.25 IOPS, 50.48 MiB/s [2024-10-09T00:13:02.727Z] 12129.20 IOPS, 47.38 MiB/s [2024-10-09T00:13:02.727Z] [2024-10-09 02:12:51.415578] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:31:42.907 [2024-10-09 02:12:51.415657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:103624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.907 [2024-10-09 02:12:51.415676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the abort dump repeats as above for the rest of the queued I/O on this qpair: WRITEs lba 103632 through 104288 (SGL DATA BLOCK) interleaved with READs lba 103272 through 103608 (SGL KEYED DATA BLOCK, key:0x791e357d), every command completed as ABORTED - SQ DELETION (00/08) ...]
00:31:42.911 [2024-10-09 02:12:51.420032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:42.911 [2024-10-09 02:12:51.420053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:42.911 [2024-10-09 02:12:51.420067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103616 len:8 PRP1 0x0 PRP2 0x0 00:31:42.911 [2024-10-09 02:12:51.420081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.911 [2024-10-09 02:12:51.420287] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20000b1ff400 was disconnected and freed. reset controller. 00:31:42.911 [2024-10-09 02:12:51.420305] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:31:42.911 [2024-10-09 02:12:51.420324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
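Each reset in this run starts with the same nvme_rdma.c *ERROR*: the teardown path expects RDMA_CM_EVENT_DISCONNECTED but the CM event channel delivers RDMA_CM_EVENT_TIMEWAIT_EXIT (15), which the kernel raises when the QP leaves the timewait state; the same mismatch appears again immediately below. A hedged librdmacm sketch of draining a CM channel while accepting either event as end-of-connection; wait_for_disconnect() is a hypothetical helper, but the calls and event names are the real rdma_cma API:

    #include <rdma/rdma_cma.h>

    /* Sketch: block on the CM event channel until the connection is gone.
     * DISCONNECTED and TIMEWAIT_EXIT both mean the QP is torn down, so
     * treating them the same avoids the spurious mismatch seen above. */
    static int wait_for_disconnect(struct rdma_event_channel *ch)
    {
        struct rdma_cm_event *ev;

        while (rdma_get_cm_event(ch, &ev) == 0) {
            /* copy the type before acking; rdma_ack_cm_event frees ev */
            enum rdma_cm_event_type type = ev->event;

            rdma_ack_cm_event(ev);
            if (type == RDMA_CM_EVENT_DISCONNECTED ||
                type == RDMA_CM_EVENT_TIMEWAIT_EXIT) {
                return 0;   /* connection torn down */
            }
            /* ignore unrelated events while draining */
        }
        return -1;
    }

As the "Resetting controller successful." notices show, the mismatch is not fatal here; the reset path proceeds despite logging it at ERROR level.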
00:31:42.911 [2024-10-09 02:12:51.420355] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:31:42.911 [2024-10-09 02:12:51.420379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:42.911 [2024-10-09 02:12:51.420396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.911 [2024-10-09 02:12:51.420412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:42.911 [2024-10-09 02:12:51.420425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.911 [2024-10-09 02:12:51.420438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:42.911 [2024-10-09 02:12:51.420450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.911 [2024-10-09 02:12:51.420463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:42.911 [2024-10-09 02:12:51.420476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.911 [2024-10-09 02:12:51.447916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:42.911 [2024-10-09 02:12:51.447942] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:31:42.911 [2024-10-09 02:12:51.447960] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:42.911 [2024-10-09 02:12:51.451074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:42.911 [2024-10-09 02:12:51.502026] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
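Two observations before the next throughput samples. First, the failover notices walk the configured paths in order, one hop per qpair teardown: 192.168.100.8:4420 -> 4421 -> 4422. Second, the MiB/s figures follow directly from the I/O size: every command in these dumps is len:8 512-byte blocks, i.e. 4 KiB per I/O, so MiB/s = IOPS × 4096 / 2^20 = IOPS / 256. For the samples below: 12076.00 / 256 ≈ 47.17, 12581.57 / 256 ≈ 49.15, 12961.00 / 256 ≈ 50.63, and 13251.33 / 256 ≈ 51.76 MiB/s, matching the log exactly.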
00:31:42.911 12076.00 IOPS, 47.17 MiB/s [2024-10-09T00:13:02.731Z] 12581.57 IOPS, 49.15 MiB/s [2024-10-09T00:13:02.731Z] 12961.00 IOPS, 50.63 MiB/s [2024-10-09T00:13:02.731Z] 13251.33 IOPS, 51.76 MiB/s [2024-10-09T00:13:02.731Z] [2024-10-09 02:12:55.831578] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
[log condensed, 02:12:55.831654 - 02:12:55.835195: nvme_qpair.c printed the same NOTICE pair for every command still queued on qid:1 -- nvme_io_qpair_print_command for each WRITE (lba 61376-61784, SGL DATA BLOCK) and READ (lba 60768-61360, SGL KEYED DATA BLOCK, key:0xc875654c), each then completed by spdk_nvme_print_completion as ABORTED - SQ DELETION (00/08) -- while the submission queue was deleted for the failover below]
00:31:42.914 [2024-10-09 02:12:55.835749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:42.914 [2024-10-09 02:12:55.835772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:42.914 [2024-10-09 02:12:55.835785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61368 len:8 PRP1 0x0 PRP2 0x0 00:31:42.914 [2024-10-09 02:12:55.835800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.914 [2024-10-09 02:12:55.836013] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20000b1ff400 was disconnected and freed. reset controller. 00:31:42.914 [2024-10-09 02:12:55.836032] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:31:42.914 [2024-10-09 02:12:55.836047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:42.914 [2024-10-09 02:12:55.836078] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:31:42.914 [2024-10-09 02:12:55.836097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:42.914 [2024-10-09 02:12:55.836111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.914 [2024-10-09 02:12:55.836129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:42.914 [2024-10-09 02:12:55.836143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.914 [2024-10-09 02:12:55.836156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:42.914 [2024-10-09 02:12:55.836169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.914 [2024-10-09 02:12:55.836183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:42.914 [2024-10-09 02:12:55.836196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.914 [2024-10-09 02:12:55.863846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:42.914 [2024-10-09 02:12:55.863873] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:31:42.914 [2024-10-09 02:12:55.863889] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:42.914 [2024-10-09 02:12:55.867013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:42.914 [2024-10-09 02:12:55.912825] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
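Third cycle, same shape, ending back on 4420. What provokes each path switch is outside this excerpt; one plausible driver, sketched purely as an assumption (the script's actual mechanism is not shown here), is toggling the target-side listener with the same RPC family the trace uses below:

    rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
    # Hypothetical trigger, not taken from this trace: dropping the active
    # listener forces the initiator qpair to disconnect, and bdev_nvme
    # (attached with -x failover) resets onto the next registered path.
    $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
    sleep 3
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422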
00:31:42.914 12508.10 IOPS, 48.86 MiB/s [2024-10-09T00:13:02.734Z] 12787.91 IOPS, 49.95 MiB/s [2024-10-09T00:13:02.734Z] 13022.75 IOPS, 50.87 MiB/s [2024-10-09T00:13:02.734Z] 13217.69 IOPS, 51.63 MiB/s [2024-10-09T00:13:02.734Z] 13382.07 IOPS, 52.27 MiB/s [2024-10-09T00:13:02.734Z] 13527.33 IOPS, 52.84 MiB/s 00:31:42.914 Latency(us) 00:31:42.914 [2024-10-09T00:13:02.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.914 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:42.914 Verification LBA range: start 0x0 length 0x4000 00:31:42.914 NVMe0n1 : 15.01 13527.58 52.84 265.28 0.00 9254.81 436.31 609085.89 00:31:42.914 [2024-10-09T00:13:02.734Z] =================================================================================================================== 00:31:42.914 [2024-10-09T00:13:02.734Z] Total : 13527.58 52.84 265.28 0.00 9254.81 436.31 609085.89 00:31:42.914 Received shutdown signal, test time was about 15.000000 seconds 00:31:42.914 00:31:42.914 Latency(us) 00:31:42.914 [2024-10-09T00:13:02.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.914 [2024-10-09T00:13:02.734Z] =================================================================================================================== 00:31:42.914 [2024-10-09T00:13:02.734Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:42.914 02:13:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:31:42.914 02:13:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:31:42.914 02:13:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:31:42.914 02:13:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3385372 00:31:42.914 02:13:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:42.915 02:13:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3385372 /var/tmp/bdevperf.sock 00:31:42.915 02:13:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3385372 ']' 00:31:42.915 02:13:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:42.915 02:13:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:42.915 02:13:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:42.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
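Above, the harness relaunches bdevperf in RPC-server mode (-z) on a private socket so controllers can be attached before the 1-second verify run starts. A condensed sketch of that launch-and-wait pattern; the rpc_get_methods poll here is only an approximation of the waitforlisten helper, which does more bookkeeping:

    spdk=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
    # -z: start idle and wait for RPC configuration; -r: private RPC socket.
    "$spdk/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    # Poll until the RPC socket answers, then configure paths over it.
    until "$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

In the trace that follows, this socket is used to attach nqn.2016-06.io.spdk:cnode1 on ports 4420-4422 with -x failover, which is what arms the path switching, and port 4420 is later detached to force one more failover.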
00:31:42.915 02:13:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:42.915 02:13:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:43.851 02:13:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:43.851 02:13:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:31:43.851 02:13:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:31:43.851 [2024-10-09 02:13:03.594128] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:31:43.851 02:13:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:31:44.110 [2024-10-09 02:13:03.798848] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:31:44.110 02:13:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:44.369 NVMe0n1 00:31:44.369 02:13:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:44.627 00:31:44.627 02:13:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:44.886 00:31:44.886 02:13:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:44.886 02:13:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:45.144 02:13:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:45.402 02:13:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:48.763 02:13:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:48.763 02:13:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:48.763 02:13:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3386082 00:31:48.763 02:13:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:48.763 02:13:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3386082 00:31:49.699 { 00:31:49.699 "results": [ 00:31:49.699 { 
00:31:49.699 "job": "NVMe0n1",
00:31:49.699 "core_mask": "0x1",
00:31:49.699 "workload": "verify",
00:31:49.699 "status": "finished",
00:31:49.699 "verify_range": {
00:31:49.699 "start": 0,
00:31:49.699 "length": 16384
00:31:49.699 },
00:31:49.699 "queue_depth": 128,
00:31:49.699 "io_size": 4096,
00:31:49.699 "runtime": 1.006549,
00:31:49.699 "iops": 15387.229037036448,
00:31:49.699 "mibps": 60.106363425923625,
00:31:49.699 "io_failed": 0,
00:31:49.699 "io_timeout": 0,
00:31:49.699 "avg_latency_us": 8272.353920229967,
00:31:49.699 "min_latency_us": 3063.095652173913,
00:31:49.699 "max_latency_us": 19033.93391304348
00:31:49.699 }
00:31:49.699 ],
00:31:49.699 "core_count": 1
00:31:49.699 }
00:31:49.699 02:13:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-10-09 02:13:02.599391] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization...
[2024-10-09 02:13:02.599496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385372 ]
[2024-10-09 02:13:02.728040] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-09 02:13:02.926756] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
[2024-10-09 02:13:05.033821] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
[2024-10-09 02:13:05.035083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-10-09 02:13:05.035149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-10-09 02:13:05.068102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
[2024-10-09 02:13:05.093619] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
Running I/O for 1 seconds...
00:31:49.699 15360.00 IOPS, 60.00 MiB/s
00:31:49.699 Latency(us)
00:31:49.699 [2024-10-09T00:13:09.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:49.699 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:49.699 Verification LBA range: start 0x0 length 0x4000
00:31:49.699 NVMe0n1 : 1.01 15387.23 60.11 0.00 0.00 8272.35 3063.10 19033.93
00:31:49.699 [2024-10-09T00:13:09.519Z] ===================================================================================================================
00:31:49.699 [2024-10-09T00:13:09.519Z] Total : 15387.23 60.11 0.00 0.00 8272.35 3063.10 19033.93
00:31:49.699 02:13:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
02:13:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:31:49.958 02:13:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:50.217 02:13:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
02:13:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:31:50.217 02:13:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:50.475 02:13:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:31:53.762 02:13:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
02:13:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:31:53.762 02:13:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3385372
00:31:53.762 02:13:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3385372 ']'
00:31:53.762 02:13:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3385372
00:31:53.762 02:13:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:31:53.762 02:13:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:53.762 02:13:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3385372
00:31:53.762 02:13:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:31:53.762 02:13:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:31:53.762 02:13:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3385372'
killing process with pid 3385372
00:31:53.762 02:13:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3385372
00:31:53.762 02:13:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3385372
00:31:55.139 02:13:14
nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:31:55.139 rmmod nvme_rdma 00:31:55.139 rmmod nvme_fabrics 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 3382722 ']' 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 3382722 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3382722 ']' 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3382722 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3382722 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3382722' 00:31:55.139 killing process with pid 3382722 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3382722 00:31:55.139 02:13:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3382722 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:31:57.046 00:31:57.046 real 0m40.356s 00:31:57.046 user 2m16.131s 00:31:57.046 sys 0m7.444s 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@10 -- # set +x 00:31:57.046 ************************************ 00:31:57.046 END TEST nvmf_failover 00:31:57.046 ************************************ 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.046 ************************************ 00:31:57.046 START TEST nvmf_host_discovery 00:31:57.046 ************************************ 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:31:57.046 * Looking for test storage... 00:31:57.046 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:57.046 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:57.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.047 --rc genhtml_branch_coverage=1 00:31:57.047 --rc genhtml_function_coverage=1 00:31:57.047 --rc genhtml_legend=1 00:31:57.047 --rc geninfo_all_blocks=1 00:31:57.047 --rc geninfo_unexecuted_blocks=1 00:31:57.047 00:31:57.047 ' 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:57.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.047 --rc genhtml_branch_coverage=1 00:31:57.047 --rc genhtml_function_coverage=1 00:31:57.047 --rc genhtml_legend=1 00:31:57.047 --rc geninfo_all_blocks=1 00:31:57.047 --rc geninfo_unexecuted_blocks=1 00:31:57.047 00:31:57.047 ' 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:57.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.047 --rc genhtml_branch_coverage=1 00:31:57.047 --rc genhtml_function_coverage=1 00:31:57.047 --rc genhtml_legend=1 00:31:57.047 --rc geninfo_all_blocks=1 00:31:57.047 --rc geninfo_unexecuted_blocks=1 00:31:57.047 00:31:57.047 ' 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:57.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.047 --rc genhtml_branch_coverage=1 00:31:57.047 --rc genhtml_function_coverage=1 00:31:57.047 --rc genhtml_legend=1 00:31:57.047 --rc geninfo_all_blocks=1 00:31:57.047 --rc geninfo_unexecuted_blocks=1 00:31:57.047 00:31:57.047 ' 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:57.047 02:13:16 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:57.047 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure 
the same IP for host and target.' 00:31:57.047 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:31:57.047 00:31:57.047 real 0m0.212s 00:31:57.047 user 0m0.128s 00:31:57.047 sys 0m0.102s 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.047 ************************************ 00:31:57.047 END TEST nvmf_host_discovery 00:31:57.047 ************************************ 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.047 ************************************ 00:31:57.047 START TEST nvmf_host_multipath_status 00:31:57.047 ************************************ 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:31:57.047 * Looking for test storage... 00:31:57.047 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:31:57.047 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:31:57.308 
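The cmp_versions walk in the surrounding trace is common.sh deciding whether the installed lcov (1.15) predates version 2 before assembling LCOV_OPTS: both version strings are split on '.', '-' and ':' and compared as integers field by field, with the first unequal field deciding. A minimal standalone sketch of that less-than test, assuming purely numeric fields:

  lt() {   # returns 0 (true) when $1 sorts before $2, e.g. lt 1.15 2
      local -a v1 v2; local i n
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # earlier field decides
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # all fields equal: not less-than
  }
  lt 1.15 2 && echo 'lcov predates v2'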
02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:57.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.308 --rc genhtml_branch_coverage=1 00:31:57.308 --rc genhtml_function_coverage=1 00:31:57.308 --rc genhtml_legend=1 00:31:57.308 --rc geninfo_all_blocks=1 00:31:57.308 --rc geninfo_unexecuted_blocks=1 00:31:57.308 00:31:57.308 ' 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:57.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.308 --rc genhtml_branch_coverage=1 00:31:57.308 --rc genhtml_function_coverage=1 00:31:57.308 --rc genhtml_legend=1 00:31:57.308 --rc geninfo_all_blocks=1 00:31:57.308 --rc geninfo_unexecuted_blocks=1 00:31:57.308 00:31:57.308 ' 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:57.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.308 --rc genhtml_branch_coverage=1 00:31:57.308 --rc genhtml_function_coverage=1 00:31:57.308 --rc genhtml_legend=1 00:31:57.308 --rc geninfo_all_blocks=1 00:31:57.308 --rc geninfo_unexecuted_blocks=1 00:31:57.308 00:31:57.308 ' 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:57.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.308 --rc genhtml_branch_coverage=1 00:31:57.308 --rc 
genhtml_function_coverage=1 00:31:57.308 --rc genhtml_legend=1 00:31:57.308 --rc geninfo_all_blocks=1 00:31:57.308 --rc geninfo_unexecuted_blocks=1 00:31:57.308 00:31:57.308 ' 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:31:57.308 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/bpftrace.sh 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:57.308 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:57.309 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:31:57.309 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:57.309 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:57.309 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:57.309 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:57.309 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.309 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.309 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.309 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:57.309 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:57.309 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:31:57.309 02:13:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:03.879 
02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 
-- # pci_devs=("${e810[@]}") 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:32:03.879 Found 0000:18:00.0 (0x8086 - 0x159b) 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:32:03.879 Found 0000:18:00.1 (0x8086 - 0x159b) 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@403 -- # modinfo irdma 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:03.879 
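Both ports of this Intel E810 NIC (device 0x159b, bound to the ice driver) expose RDMA only through the companion irdma module, and irdma defaults to iWARP, so the harness loads it with roce_ena=1 to get the RoCEv2 devices these rdma-transport tests expect. Roughly, outside the harness, that amounts to the following (a sketch; the trace above also checks /sys/module/irdma/parameters/roce_ena before deciding to reload):

  modprobe -r irdma 2>/dev/null                # unload first so the parameter change takes effect
  modprobe irdma roce_ena=1                    # 0 = iWARP (default), 1 = RoCEv2
  cat /sys/module/irdma/parameters/roce_ena    # confirm the running value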
02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:32:03.879 Found net devices under 0000:18:00.0: cvl_0_0 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:32:03.879 Found net devices under 0000:18:00.1: cvl_0_1 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:03.879 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:32:03.880 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:03.880 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:32:03.880 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:32:03.880 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # rdma_device_init 00:32:03.880 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:32:03.880 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:32:03.880 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:32:03.880 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:32:03.880 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:32:03.880 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:32:03.880 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:32:03.880 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:32:03.880 02:13:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@528 -- # allocate_nic_ips 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo cvl_0_0 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo cvl_0_1 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:32:03.880 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:32:03.880 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:32:03.880 altname enp24s0f0np0 00:32:03.880 altname ens785f0np0 00:32:03.880 inet 192.168.100.8/24 scope global cvl_0_0 00:32:03.880 valid_lft forever preferred_lft forever 00:32:03.880 inet6 
fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:32:03.880 valid_lft forever preferred_lft forever 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:32:03.880 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:32:03.880 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:32:03.880 altname enp24s0f1np1 00:32:03.880 altname ens785f1np1 00:32:03.880 inet 192.168.100.9/24 scope global cvl_0_1 00:32:03.880 valid_lft forever preferred_lft forever 00:32:03.880 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:32:03.880 valid_lft forever preferred_lft forever 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo cvl_0_0 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo cvl_0_1 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:32:03.880 192.168.100.9' 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # head -n 1 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:32:03.880 192.168.100.9' 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:32:03.880 192.168.100.9' 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # tail -n +2 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # head -n 1 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # 
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:03.880 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:03.881 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:03.881 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=3390108 00:32:03.881 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:03.881 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 3390108 00:32:03.881 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3390108 ']' 00:32:03.881 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:03.881 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:03.881 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:03.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:03.881 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:03.881 02:13:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:03.881 [2024-10-09 02:13:23.267805] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:32:03.881 [2024-10-09 02:13:23.267918] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:03.881 [2024-10-09 02:13:23.391267] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:03.881 [2024-10-09 02:13:23.573671] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:03.881 [2024-10-09 02:13:23.573733] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:03.881 [2024-10-09 02:13:23.573746] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:03.881 [2024-10-09 02:13:23.573759] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:03.881 [2024-10-09 02:13:23.573770] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
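Condensed, the interface discovery traced above is nvmf/common.sh walking the RDMA-capable netdevs (cvl_0_0, cvl_0_1) and extracting each one's IPv4 address. A minimal sketch of that idiom, simplified from the get_ip_address calls visible in the trace and not the verbatim library source:

    # Print the IPv4 address of an interface. "ip -o -4" emits one line per
    # address; field 4 is "ADDR/PREFIX", so strip the prefix length with cut.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address cvl_0_0)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address cvl_0_1)   # 192.168.100.9 in this run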
00:32:03.881 [2024-10-09 02:13:23.575173] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:03.881 [2024-10-09 02:13:23.575191] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.448 02:13:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:04.448 02:13:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:32:04.448 02:13:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:04.448 02:13:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:04.448 02:13:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:04.448 02:13:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:04.448 02:13:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3390108 00:32:04.448 02:13:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:32:04.706 [2024-10-09 02:13:24.323430] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x612000028cc0/0x617000007c40) succeed. 00:32:04.706 [2024-10-09 02:13:24.332924] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000028e40/0x617000007fc0) succeed. 00:32:04.706 [2024-10-09 02:13:24.332959] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:32:04.706 02:13:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:04.964 Malloc0 00:32:04.964 02:13:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:05.222 02:13:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:05.223 02:13:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:05.481 [2024-10-09 02:13:25.137756] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:05.481 02:13:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:32:05.739 [2024-10-09 02:13:25.318327] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:32:05.739 02:13:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:05.739 02:13:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3390318 00:32:05.739 02:13:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:05.739 02:13:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3390318 /var/tmp/bdevperf.sock 00:32:05.739 02:13:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3390318 ']' 00:32:05.739 02:13:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:05.739 02:13:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:05.739 02:13:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:05.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
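Condensed, the RPC sequence traced above stands up a one-namespace subsystem with two RDMA listeners on the same address; the two ports are what give bdevperf its two I/O paths. Commands as executed in the trace, with the long rpc.py path shortened for readability:

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Same address, two ports: 4420 and 4421 become the two multipath I/O paths.
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421

Every check_status round that follows leans on a single helper: port_status queries bdev_nvme_get_io_paths over the bdevperf RPC socket and uses jq to pull one field for one listener port. A minimal sketch of that pattern, mirroring the multipath_status.sh@64 trace lines rather than reproducing the test source:

    # port_status PORT FIELD EXPECTED, where FIELD is "current", "connected"
    # or "accessible"; succeeds when the reported value matches EXPECTED.
    port_status() {
        local port=$1 field=$2 expected=$3
        local got
        got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$got" == "$expected" ]]
    }
    port_status 4420 current true   # e.g. the first check of each round below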
00:32:05.739 02:13:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:05.739 02:13:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:06.673 02:13:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:06.673 02:13:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:32:06.673 02:13:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:06.673 02:13:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:06.931 Nvme0n1 00:32:06.931 02:13:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:07.190 Nvme0n1 00:32:07.191 02:13:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:07.191 02:13:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:09.722 02:13:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:09.722 02:13:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:32:09.722 02:13:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:32:09.722 02:13:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:10.657 02:13:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:10.657 02:13:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:10.657 02:13:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.657 02:13:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:10.915 02:13:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.915 02:13:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:10.915 02:13:30 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.915 02:13:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:11.174 02:13:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:11.174 02:13:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:11.174 02:13:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.174 02:13:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:11.174 02:13:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.174 02:13:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:11.174 02:13:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.174 02:13:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:11.432 02:13:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.432 02:13:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:11.432 02:13:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:11.432 02:13:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.691 02:13:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.691 02:13:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:11.691 02:13:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:11.691 02:13:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.949 02:13:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.950 02:13:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:11.950 02:13:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:32:11.950 02:13:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:32:12.208 02:13:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:13.143 02:13:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:13.143 02:13:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:13.143 02:13:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.143 02:13:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:13.401 02:13:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:13.401 02:13:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:13.401 02:13:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.401 02:13:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:13.660 02:13:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.660 02:13:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:13.660 02:13:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.660 02:13:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:13.918 02:13:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.918 02:13:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:13.918 02:13:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.918 02:13:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:13.918 02:13:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.918 02:13:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:13.918 02:13:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.918 02:13:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:14.177 02:13:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.177 02:13:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:14.177 02:13:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.177 02:13:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:14.435 02:13:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.435 02:13:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:32:14.435 02:13:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:32:14.694 02:13:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:32:14.952 02:13:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:32:15.888 02:13:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:32:15.888 02:13:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:15.888 02:13:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:15.889 02:13:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:16.148 02:13:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:16.148 02:13:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:16.148 02:13:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:16.148 02:13:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.148 02:13:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:16.149 02:13:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected 
true 00:32:16.149 02:13:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.149 02:13:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:16.407 02:13:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:16.407 02:13:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:16.407 02:13:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.407 02:13:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:16.665 02:13:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:16.665 02:13:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:16.665 02:13:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.666 02:13:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:16.924 02:13:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:16.924 02:13:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:16.924 02:13:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.924 02:13:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:16.924 02:13:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:16.924 02:13:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:32:16.924 02:13:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:32:17.182 02:13:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:32:17.441 02:13:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:18.376 02:13:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:18.376 
02:13:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:18.376 02:13:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:18.376 02:13:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:18.634 02:13:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:18.634 02:13:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:18.634 02:13:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:18.634 02:13:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:18.893 02:13:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:18.893 02:13:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:18.893 02:13:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:18.893 02:13:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.162 02:13:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:19.162 02:13:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:19.162 02:13:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.162 02:13:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:19.162 02:13:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:19.162 02:13:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:19.163 02:13:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.163 02:13:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:19.421 02:13:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:19.421 02:13:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:19.421 02:13:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:19.421 02:13:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.679 02:13:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:19.679 02:13:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:19.679 02:13:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:32:19.938 02:13:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:32:19.938 02:13:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:21.315 02:13:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:21.316 02:13:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:21.316 02:13:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:21.316 02:13:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:21.316 02:13:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:21.316 02:13:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:21.316 02:13:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:21.316 02:13:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:21.575 02:13:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:21.575 02:13:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:21.575 02:13:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:21.575 02:13:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:21.575 02:13:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:21.575 02:13:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:21.575 02:13:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:21.575 02:13:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:21.834 02:13:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:21.834 02:13:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:21.834 02:13:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:21.834 02:13:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:22.093 02:13:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:22.093 02:13:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:22.093 02:13:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:22.093 02:13:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:22.352 02:13:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:22.352 02:13:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:22.352 02:13:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:32:22.352 02:13:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:32:22.611 02:13:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:32:23.548 02:13:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:32:23.548 02:13:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:23.548 02:13:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:23.548 02:13:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:23.806 02:13:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:23.806 02:13:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:23.806 02:13:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:23.806 02:13:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:24.065 02:13:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:24.065 02:13:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:24.065 02:13:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:24.065 02:13:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:24.324 02:13:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:24.324 02:13:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:24.324 02:13:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:24.324 02:13:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:24.584 02:13:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:24.584 02:13:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:24.584 02:13:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:24.584 02:13:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:24.584 02:13:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:24.584 02:13:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:24.584 02:13:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:24.584 02:13:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:24.843 02:13:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:24.843 02:13:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:25.101 02:13:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:32:25.101 02:13:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:32:25.361 02:13:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:32:25.620 02:13:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:26.558 02:13:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:26.558 02:13:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:26.558 02:13:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:26.558 02:13:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:26.817 02:13:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:26.817 02:13:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:26.817 02:13:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:26.817 02:13:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:26.817 02:13:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:26.817 02:13:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:26.817 02:13:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:26.817 02:13:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:27.076 02:13:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:27.076 02:13:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:27.076 02:13:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.076 02:13:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:27.335 02:13:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:27.335 02:13:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:27.335 02:13:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.335 02:13:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:27.594 02:13:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:27.594 02:13:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:27.594 02:13:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:27.594 02:13:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.594 02:13:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:27.594 02:13:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:27.594 02:13:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:32:27.853 02:13:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:32:28.112 02:13:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:29.048 02:13:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:29.048 02:13:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:29.048 02:13:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:29.048 02:13:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:29.307 02:13:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:29.307 02:13:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:29.307 02:13:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:29.307 02:13:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:29.566 02:13:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:29.566 02:13:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:29.566 02:13:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:29.566 02:13:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:29.825 02:13:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:29.825 02:13:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:29.825 02:13:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:29.825 02:13:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:29.825 02:13:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:29.825 02:13:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:29.825 02:13:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:29.825 02:13:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:30.084 02:13:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:30.084 02:13:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:30.084 02:13:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:30.084 02:13:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:30.342 02:13:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:30.342 02:13:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:30.342 02:13:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:32:30.599 02:13:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:32:30.599 02:13:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:32:31.976 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:31.976 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:31.976 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:31.976 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:31.976 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:31.976 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:31.977 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:31.977 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:32.236 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:32.236 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:32.236 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:32.236 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:32.236 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:32.236 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:32.236 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:32.236 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:32.495 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:32.495 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:32.495 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:32.495 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
00:32:31.976 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:32:31.976 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:32:31.976 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:31.976 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:31.976 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:31.976 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:32:31.977 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:31.977 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:32.236 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:32.236 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:32.236 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:32.236 02:13:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:32.236 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:32.236 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:32.236 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:32.236 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:32.495 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:32.495 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:32.495 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:32.495 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:32.754 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:32.754 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:32:32.754 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:32.754 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:33.013 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:33.013 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:32:33.013 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:32:33.013 02:13:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible
00:32:33.272 02:13:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:32:34.650 02:13:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:32:34.650 02:13:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:32:34.650 02:13:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:34.650 02:13:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:34.650 02:13:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:34.650 02:13:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:32:34.650 02:13:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:34.650 02:13:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:34.650 02:13:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:34.650 02:13:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:34.650 02:13:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:34.650 02:13:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:34.910 02:13:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:34.910 02:13:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:34.910 02:13:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:34.910 02:13:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:35.169 02:13:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:35.169 02:13:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:35.169 02:13:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:35.169 02:13:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:35.428 02:13:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:35.428 02:13:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:32:35.428 02:13:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:35.428 02:13:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:35.428 02:13:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
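The second pass above confirms the expected flip after set_ANA_state non_optimized inaccessible: port 4421 stays connected, but is no longer current or accessible. For readability, here is a minimal sketch of the two helpers being traced, reconstructed from the @59/@60 and @64 lines (helper-local variable names are assumptions; the commands themselves are as traced):

    rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py

    # Set the ANA state of the two subsystem listeners (ports 4420 and 4421).
    set_ANA_state() {
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4421 -n "$2"
    }

    # Assert that one field of one io_path has the expected value.
    # $1 = trsvcid, $2 = field (current/connected/accessible), $3 = expected
    port_status() {
        [[ $("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2") == "$3" ]]
    }

check_status is then six port_status calls in a row (current, connected and accessible for each of the two ports), which is why each @131/@135 line above fans out into six @68-@73 probe triplets.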
00:32:35.428 02:13:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3390318
00:32:35.428 02:13:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3390318 ']'
00:32:35.428 02:13:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3390318
00:32:35.428 02:13:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:32:35.428 02:13:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:35.428 02:13:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3390318
00:32:35.687 02:13:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:32:35.688 02:13:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:32:35.688 02:13:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3390318'
00:32:35.688 killing process with pid 3390318
00:32:35.688 02:13:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3390318
00:32:35.688 02:13:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3390318
00:32:35.688 {
00:32:35.688 "results": [
00:32:35.688 {
00:32:35.688 "job": "Nvme0n1",
00:32:35.688 "core_mask": "0x4",
00:32:35.688 "workload": "verify",
00:32:35.688 "status": "terminated",
00:32:35.688 "verify_range": {
00:32:35.688 "start": 0,
00:32:35.688 "length": 16384
00:32:35.688 },
00:32:35.688 "queue_depth": 128,
00:32:35.688 "io_size": 4096,
00:32:35.688 "runtime": 28.154388,
00:32:35.688 "iops": 14010.462596452106,
00:32:35.688 "mibps": 54.72836951739104,
00:32:35.688 "io_failed": 0,
00:32:35.688 "io_timeout": 0,
00:32:35.688 "avg_latency_us": 9113.665129759334,
00:32:35.688 "min_latency_us": 619.7426086956522,
00:32:35.688 "max_latency_us": 3019898.88
00:32:35.688 }
00:32:35.688 ],
00:32:35.688 "core_count": 1
00:32:35.688 }
00:32:36.627 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3390318
00:32:36.627 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:36.627 [2024-10-09 02:13:25.407195] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization...
00:32:36.627 [2024-10-09 02:13:25.407303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3390318 ]
00:32:36.627 [2024-10-09 02:13:25.534600] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:36.627 [2024-10-09 02:13:25.734606] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:32:36.627 Running I/O for 90 seconds...
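The @950-@974 lines above are common/autotest_common.sh's killprocess walking bdevperf down. A condensed sketch of the logic the trace implies (the sudo special-case is visible at @960 but not taken in this run, so its body is omitted):

    # Sketch of killprocess as implied by the traced @950-@974 steps.
    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                            # @950: require a pid
        kill -0 "$pid" || return                             # @954: still alive?
        if [ "$(uname)" = Linux ]; then                      # @955
            process_name=$(ps --no-headers -o comm= "$pid")  # @956: "reactor_2" here
        fi
        if [ "$process_name" = sudo ]; then                  # @960: would need to signal
            :                                                # sudo's child; not taken here
        fi
        echo "killing process with pid $pid"                 # @968
        kill "$pid"                                          # @969
        wait "$pid"                                          # @974: reap; bdevperf emits its JSON summary on exit
    }

The JSON summary it flushes is self-consistent: 14010.462596 IOPS x 4096-byte I/Os / 2^20 = 54.728 MiB/s, matching "mibps", and "status": "terminated" with "io_failed": 0 reflects the deliberate kill after 28.15 seconds of the 90-second run rather than an I/O error.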
00:32:36.627 15744.00 IOPS, 61.50 MiB/s [2024-10-09T00:13:56.447Z] 16045.00 IOPS, 62.68 MiB/s [2024-10-09T00:13:56.447Z] 16084.00 IOPS, 62.83 MiB/s [2024-10-09T00:13:56.447Z] 16064.00 IOPS, 62.75 MiB/s [2024-10-09T00:13:56.447Z] 16053.60 IOPS, 62.71 MiB/s [2024-10-09T00:13:56.447Z] 16106.67 IOPS, 62.92 MiB/s [2024-10-09T00:13:56.447Z] 16106.14 IOPS, 62.91 MiB/s [2024-10-09T00:13:56.447Z] 16129.50 IOPS, 63.01 MiB/s [2024-10-09T00:13:56.447Z] 16115.11 IOPS, 62.95 MiB/s [2024-10-09T00:13:56.447Z] 16101.30 IOPS, 62.90 MiB/s [2024-10-09T00:13:56.447Z] 16105.82 IOPS, 62.91 MiB/s [2024-10-09T00:13:56.447Z] 16106.08 IOPS, 62.91 MiB/s [2024-10-09T00:13:56.447Z]
00:32:36.627 [2024-10-09 02:13:39.506298 .. 02:13:39.511223] nvme_qpair.c: 243/474: [~128 near-identical command/completion pairs elided for readability: READ sqid:1 lba 27352-27736 len:8 (SGL KEYED DATA BLOCK ADDRESS, len:0x1000, key:0x8c4cf39d) and WRITE sqid:1 lba 27744-28368 len:8 (SGL DATA BLOCK OFFSET 0x0, len:0x1000), every completion *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0]
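When triaging a capture like try.txt (the file dumped by the @141 step above), two greps give the shape of the failure window without wading through the per-command noise; a small sketch, assuming the capture keeps one completion per line:

    log=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/try.txt
    # Lines reporting completions with the ANA "inaccessible" status code:
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' "$log"
    # The per-second bdevperf samples, to see the throughput dip and recovery:
    grep -oE '[0-9]+\.[0-9]+ IOPS, [0-9]+\.[0-9]+ MiB/s' "$log"

In this capture the samples dip from a steady ~62 MiB/s to 51.12 MiB/s around the forced ANA transition, then climb back through 54 MiB/s as I/O settles on the remaining accessible path (see the samples just below).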
00:32:36.630 15386.69 IOPS, 60.10 MiB/s [2024-10-09T00:13:56.447Z] 14287.64 IOPS, 55.81 MiB/s [2024-10-09T00:13:56.447Z] 13335.13 IOPS, 52.09 MiB/s [2024-10-09T00:13:56.447Z] 13087.44 IOPS, 51.12 MiB/s [2024-10-09T00:13:56.447Z] 13277.65 IOPS, 51.87 MiB/s [2024-10-09T00:13:56.447Z] 13428.56 IOPS, 52.46 MiB/s [2024-10-09T00:13:56.447Z] 13457.68 IOPS, 52.57 MiB/s [2024-10-09T00:13:56.447Z] 13468.85 IOPS, 52.61 MiB/s [2024-10-09T00:13:56.447Z] 13557.05 IOPS, 52.96 MiB/s [2024-10-09T00:13:56.447Z] 13692.41 IOPS, 53.49 MiB/s [2024-10-09T00:13:56.447Z] 13808.22 IOPS, 53.94 MiB/s [2024-10-09T00:13:56.447Z] 13834.67 IOPS, 54.04 MiB/s [2024-10-09T00:13:56.447Z] 13832.84 IOPS, 54.03 MiB/s [2024-10-09T00:13:56.447Z]
00:32:36.630 [2024-10-09 02:13:52.999646 .. 02:13:53.000789] nvme_qpair.c: 243/474: [similar command/completion pairs elided: WRITE sqid:1 lba 109984-110096 len:8 (SGL DATA BLOCK OFFSET 0x0, len:0x1000) interleaved with READ sqid:1 lba 109456-109704 len:8 (SGL KEYED DATA BLOCK ADDRESS, len:0x1000, key:0x8c4cf39d), every completion *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0]
00:32:36.631 [2024-10-09 02:13:53.000804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:36.631 [2024-10-09 02:13:53.000821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:36.631 [2024-10-09 02:13:53.000837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:36.631 [2024-10-09 02:13:53.000854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.000869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007503000 len:0x1000 key:0x8c4cf39d 00:32:36.631 [2024-10-09 02:13:53.000884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.000902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.631 [2024-10-09 02:13:53.000917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.000933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007519000 len:0x1000 key:0x8c4cf39d 00:32:36.631 [2024-10-09 02:13:53.000947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.000963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.631 [2024-10-09 02:13:53.000977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.000993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:109624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a1000 len:0x1000 key:0x8c4cf39d 00:32:36.631 [2024-10-09 02:13:53.001007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.001023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.631 [2024-10-09 02:13:53.001038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.001053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.631 [2024-10-09 02:13:53.001067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.001083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.631 [2024-10-09 02:13:53.001102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.001117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cf000 len:0x1000 key:0x8c4cf39d 00:32:36.631 [2024-10-09 02:13:53.001131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.001148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 
lba:109688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ab000 len:0x1000 key:0x8c4cf39d 00:32:36.631 [2024-10-09 02:13:53.001162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.001178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.631 [2024-10-09 02:13:53.001194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.001210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:109720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007559000 len:0x1000 key:0x8c4cf39d 00:32:36.631 [2024-10-09 02:13:53.001226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.001241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.631 [2024-10-09 02:13:53.001257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.001273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.631 [2024-10-09 02:13:53.001288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.001303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.631 [2024-10-09 02:13:53.001317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.001335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.631 [2024-10-09 02:13:53.001352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.001367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:109792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007561000 len:0x1000 key:0x8c4cf39d 00:32:36.631 [2024-10-09 02:13:53.001382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.001398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cd000 len:0x1000 key:0x8c4cf39d 00:32:36.631 [2024-10-09 02:13:53.001412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.001428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:109840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b9000 len:0x1000 key:0x8c4cf39d 00:32:36.631 [2024-10-09 02:13:53.001442] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.001459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b1000 len:0x1000 key:0x8c4cf39d 00:32:36.631 [2024-10-09 02:13:53.001473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.001488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.631 [2024-10-09 02:13:53.001503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.001519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007591000 len:0x1000 key:0x8c4cf39d 00:32:36.631 [2024-10-09 02:13:53.001533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.001553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.631 [2024-10-09 02:13:53.001568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.001583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007573000 len:0x1000 key:0x8c4cf39d 00:32:36.631 [2024-10-09 02:13:53.001601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.001619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.631 [2024-10-09 02:13:53.001633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:36.631 [2024-10-09 02:13:53.001649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b5000 len:0x1000 key:0x8c4cf39d 00:32:36.632 [2024-10-09 02:13:53.001664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:36.632 [2024-10-09 02:13:53.001680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007513000 len:0x1000 key:0x8c4cf39d 00:32:36.632 [2024-10-09 02:13:53.001694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:36.632 [2024-10-09 02:13:53.001709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:109736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007525000 len:0x1000 key:0x8c4cf39d 00:32:36.632 [2024-10-09 02:13:53.001724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:36.632 
[2024-10-09 02:13:53.001740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:109752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007509000 len:0x1000 key:0x8c4cf39d 00:32:36.632 [2024-10-09 02:13:53.001754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:36.632 [2024-10-09 02:13:53.001769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.632 [2024-10-09 02:13:53.001784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:36.632 [2024-10-09 02:13:53.001799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.632 [2024-10-09 02:13:53.001813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:36.632 [2024-10-09 02:13:53.001828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.632 [2024-10-09 02:13:53.001846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.632 [2024-10-09 02:13:53.001870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:109808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bf000 len:0x1000 key:0x8c4cf39d 00:32:36.632 [2024-10-09 02:13:53.001885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:36.632 [2024-10-09 02:13:53.001900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.632 [2024-10-09 02:13:53.001914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:36.632 [2024-10-09 02:13:53.001930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.632 [2024-10-09 02:13:53.001946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:36.632 [2024-10-09 02:13:53.001961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751b000 len:0x1000 key:0x8c4cf39d 00:32:36.632 [2024-10-09 02:13:53.001976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:36.632 [2024-10-09 02:13:53.001994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:109896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ad000 len:0x1000 key:0x8c4cf39d 00:32:36.632 [2024-10-09 02:13:53.002009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:36.632 [2024-10-09 02:13:53.002024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:109920 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200007569000 len:0x1000 key:0x8c4cf39d 00:32:36.632 [2024-10-09 02:13:53.002039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:36.632 [2024-10-09 02:13:53.002055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.632 [2024-10-09 02:13:53.002070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:36.632 [2024-10-09 02:13:53.002085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.632 [2024-10-09 02:13:53.002102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:36.632 [2024-10-09 02:13:53.002118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.632 [2024-10-09 02:13:53.002132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:36.632 [2024-10-09 02:13:53.002147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.632 [2024-10-09 02:13:53.002162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:36.632 [2024-10-09 02:13:53.002178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bd000 len:0x1000 key:0x8c4cf39d 00:32:36.632 [2024-10-09 02:13:53.002192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:36.632 13836.92 IOPS, 54.05 MiB/s [2024-10-09T00:13:56.452Z] 13925.59 IOPS, 54.40 MiB/s [2024-10-09T00:13:56.452Z] 14004.82 IOPS, 54.71 MiB/s [2024-10-09T00:13:56.452Z] Received shutdown signal, test time was about 28.155068 seconds 00:32:36.632 00:32:36.632 Latency(us) 00:32:36.632 [2024-10-09T00:13:56.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.632 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:36.632 Verification LBA range: start 0x0 length 0x4000 00:32:36.632 Nvme0n1 : 28.15 14010.46 54.73 0.00 0.00 9113.67 619.74 3019898.88 00:32:36.632 [2024-10-09T00:13:56.452Z] =================================================================================================================== 00:32:36.632 [2024-10-09T00:13:56.452Z] Total : 14010.46 54.73 0.00 0.00 9113.67 619.74 3019898.88 00:32:36.632 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
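The ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions condensed above are the point of this test rather than a failure: status code type 0x3 is the NVMe "path related" class, and status code 0x02 reports that the ANA group serving the namespace is inaccessible, which is the state multipath_status.sh drives the active path into before checking that I/O recovers (visible in the IOPS samples dipping and then climbing back over 14k). A minimal way to watch path state from outside the test, assuming a recent SPDK tree whose rpc.py provides the bdev_nvme path RPCs and that the app listens on the default RPC socket (both assumptions, not shown in this log):

    # hedged sketch -- adjust the rpc.py path and RPC socket to your setup
    scripts/rpc.py bdev_nvme_get_controllers   # controllers attached via bdev_nvme
    scripts/rpc.py bdev_nvme_get_io_paths      # per-path availability/ANA state, on releases that provide it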
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:32:36.891 rmmod nvme_rdma
00:32:36.891 rmmod nvme_fabrics
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 3390108 ']'
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 3390108
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3390108 ']'
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3390108
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3390108
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3390108'
00:32:36.891 killing process with pid 3390108
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3390108
00:32:36.891 02:13:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3390108
00:32:38.857 02:13:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:32:38.857 02:13:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]]
00:32:38.857
00:32:38.857 real 0m41.419s
00:32:38.857 user 1m57.376s
00:32:38.857 sys 0m9.173s
00:32:38.857 02:13:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:38.857 02:13:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:32:38.857 ************************************
00:32:38.857 END TEST nvmf_host_multipath_status
00:32:38.857 ************************************
00:32:38.857 02:13:58 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:32:38.857 02:13:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:32:38.857 02:13:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:38.857 02:13:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.858 ************************************
00:32:38.858 START TEST nvmf_discovery_remove_ifc
00:32:38.858 ************************************
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:32:38.858 * Looking for test storage...
00:32:38.858 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh
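The lt/cmp_versions trace above is autotest_common.sh deciding that the installed lcov (1.15) predates version 2 before it switches on the --rc coverage flags exported next. Condensed to its core, the comparison works roughly like the sketch below (a simplified sketch, not the verbatim helper: it assumes purely numeric dot/dash-separated fields, whereas the real cmp_versions also routes each field through its decimal helper):

    # hedged sketch of the field-by-field version compare traced above
    lt() {  # succeeds (returns 0) when version $1 < version $2
        local -a v1 v2
        local i
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal versions are not "less than"
    }

    lt 1.15 2 && echo older   # prints "older", matching the trace's return 0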
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:38.858 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2-6 -- # PATH assignments, each prepending /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the existing PATH, followed by export PATH and echo $PATH (full repeated values elided)
00:32:38.859 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:32:38.859 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:32:38.859 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:32:38.859 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:38.859 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:38.859 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:38.859 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:32:38.859 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:32:38.859 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:32:38.859 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:32:38.859 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:32:38.859 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']'
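The "[: : integer expression expected" complaint logged above (and repeated below when the next test sources the same file) is a genuine, if harmless, bug in the traced script: at test/nvmf/common.sh line 33 an unset variable reaches '[' '' -eq 1 ']', and '[' cannot compare an empty string as an integer, so the branch silently evaluates false. The usual fix is to give the variable a numeric default before testing it; a minimal sketch (SPDK_TEST_EXAMPLE is a hypothetical stand-in, since the trace does not capture the real variable's name):

    # hedged sketch: default the value so '[' always sees an integer
    if [ "${SPDK_TEST_EXAMPLE:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi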
00:32:38.859 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
00:32:38.859 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.
00:32:38.859 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0
00:32:38.859
00:32:38.859 real 0m0.180s
00:32:38.859 user 0m0.100s
00:32:38.859 sys 0m0.087s
00:32:38.859 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:38.859 02:13:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:32:38.859 ************************************
00:32:38.859 END TEST nvmf_discovery_remove_ifc
00:32:38.859 ************************************
00:32:38.859 02:13:58 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma
00:32:38.859 02:13:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:32:38.859 02:13:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:38.859 02:13:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.859 ************************************
00:32:38.859 START TEST nvmf_identify_kernel_target
00:32:38.859 ************************************
00:32:38.859 02:13:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma
00:32:38.859 * Looking for test storage...
00:32:38.859 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host
00:32:38.859 02:13:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- [duplicate trace elided: the same lcov version check and LCOV_OPTS/LCOV exports, host/identify_kernel_nvmf.sh@9 sourcing test/nvmf/common.sh, the NVMF_*/NVME_* variable setup (including the same NVME_HOSTNQN), the paths/export.sh PATH handling, and the build_nvmf_app_args trace already shown above for nvmf_discovery_remove_ifc]
00:32:38.860 02:13:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:32:38.860 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:32:38.860 02:13:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:32:38.860 02:13:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:32:38.860 02:13:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:32:38.860 02:13:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit
00:32:38.860 02:13:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z rdma ']'
00:32:38.860 02:13:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:32:38.860 02:13:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs
00:32:38.860 02:13:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no
00:32:38.860 02:13:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns
00:32:38.860 02:13:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:38.860 02:13:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:38.860 02:13:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:39.151 02:13:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:32:39.151 02:13:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:32:39.151 02:13:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable
00:32:39.151 02:13:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=()
00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=()
00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=()
00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810
00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=()
00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722
-- # local -ga x722 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:32:45.723 Found 0000:18:00.0 (0x8086 - 0x159b) 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
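(For reference, while the per-device scan continues below: the NIC classification that nvmf/common.sh@313-@356 traced above condenses to the shell sketch that follows. It is a reconstruction from the xtrace, not the verbatim source; pci_bus_cache is assumed to be an associative array keyed by "vendor:device" hex IDs and filled by an earlier PCI scan, and the e810 == e810 test at @355 is assumed to compare against SPDK_TEST_NVMF_NICS from autorun-spdk.conf.)

intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()
# Collect candidate RDMA-capable ports by PCI vendor:device ID.
e810+=(${pci_bus_cache["$intel:0x1592"]})     # E810-C for QSFP
e810+=(${pci_bus_cache["$intel:0x159b"]})     # E810-XXV for SFP -- the two 0000:18:00.x ports found here
x722+=(${pci_bus_cache["$intel:0x37d2"]})     # X722
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})   # ConnectX-5, one of several Mellanox IDs probed
pci_devs=("${e810[@]}" "${x722[@]}" "${mlx[@]}")
[[ $SPDK_TEST_NVMF_NICS == e810 ]] && pci_devs=("${e810[@]}")   # narrow to the NIC family under test
# Each matched port is announced ("Found ...") and, on the RDMA transport,
# the nvme-cli connect timeout is raised for later connects:
NVME_CONNECT='nvme connect -i 15'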
00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:32:45.723 Found 0000:18:00.1 (0x8086 - 0x159b) 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@403 -- # modinfo irdma 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:32:45.723 Found net devices under 0000:18:00.0: cvl_0_0 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.723 02:14:05 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:32:45.723 Found net devices under 0000:18:00.1: cvl_0_1 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # rdma_device_init 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:32:45.723 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@528 -- # allocate_nic_ips 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:45.724 02:14:05 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo cvl_0_0 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo cvl_0_1 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:32:45.724 28: cvl_0_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:32:45.724 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:32:45.724 altname enp24s0f0np0 00:32:45.724 altname ens785f0np0 00:32:45.724 inet 192.168.100.8/24 scope global cvl_0_0 00:32:45.724 valid_lft forever preferred_lft forever 00:32:45.724 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:32:45.724 valid_lft forever preferred_lft forever 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:32:45.724 29: cvl_0_1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:32:45.724 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:32:45.724 altname enp24s0f1np1 00:32:45.724 altname ens785f1np1 00:32:45.724 inet 192.168.100.9/24 scope global cvl_0_1 00:32:45.724 valid_lft forever preferred_lft forever 00:32:45.724 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:32:45.724 valid_lft forever preferred_lft forever 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo cvl_0_0 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ cvl_0_1 
== \c\v\l\_\0\_\1 ]] 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo cvl_0_1 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:32:45.724 192.168.100.9' 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:32:45.724 192.168.100.9' 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # head -n 1 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:32:45.724 192.168.100.9' 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # tail -n +2 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # head -n 1 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:32:45.724 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:32:45.725 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:32:45.725 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:32:45.725 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:32:45.725 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:32:45.725 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:45.725 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:45.725 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:45.725 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:32:45.725 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:45.725 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:32:45.725 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:45.725 02:14:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:32:49.016 Waiting for block devices as requested 00:32:49.016 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:32:49.016 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:49.016 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:49.275 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:49.275 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:49.275 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:49.275 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:49.534 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:49.534 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:49.534 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:49.792 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:49.792 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:49.792 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:50.051 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:50.051 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:50.051 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:50.310 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:50.310 02:14:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:32:50.310 02:14:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:50.311 02:14:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:32:50.311 02:14:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:32:50.311 02:14:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:50.311 02:14:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:32:50.311 02:14:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:32:50.311 02:14:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:32:50.311 02:14:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:50.311 No valid GPT data, bailing 00:32:50.311 02:14:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:50.311 02:14:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:32:50.311 02:14:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:32:50.311 02:14:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:32:50.311 02:14:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:32:50.311 02:14:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:50.311 02:14:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:50.311 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:50.311 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:50.311 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:32:50.311 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:32:50.311 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:32:50.311 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 192.168.100.8 00:32:50.311 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo rdma 00:32:50.311 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:32:50.311 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:32:50.311 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:50.311 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -a 192.168.100.8 -t rdma -s 4420 00:32:50.571 00:32:50.571 Discovery Log Number of Records 2, Generation counter 2 00:32:50.571 =====Discovery Log Entry 0====== 00:32:50.571 trtype: rdma 00:32:50.571 adrfam: ipv4 00:32:50.571 subtype: current discovery subsystem 00:32:50.571 treq: not specified, sq flow control disable supported 00:32:50.571 portid: 1 00:32:50.571 trsvcid: 4420 00:32:50.571 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:50.571 traddr: 192.168.100.8 00:32:50.571 eflags: none 00:32:50.571 rdma_prtype: not specified 00:32:50.571 rdma_qptype: connected 00:32:50.571 rdma_cms: rdma-cm 00:32:50.571 rdma_pkey: 0x0000 00:32:50.571 =====Discovery Log Entry 1====== 00:32:50.571 trtype: rdma 00:32:50.571 adrfam: ipv4 00:32:50.571 subtype: nvme subsystem 00:32:50.571 treq: not specified, sq flow control disable supported 00:32:50.571 portid: 1 00:32:50.571 trsvcid: 4420 00:32:50.571 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:50.571 traddr: 192.168.100.8 00:32:50.571 eflags: none 00:32:50.571 rdma_prtype: not specified 00:32:50.571 rdma_qptype: connected 00:32:50.571 rdma_cms: rdma-cm 00:32:50.571 rdma_pkey: 0x0000 00:32:50.571 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:32:50.571 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:50.571 ===================================================== 00:32:50.571 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:50.571 ===================================================== 00:32:50.571 Controller Capabilities/Features 00:32:50.571 ================================ 00:32:50.571 Vendor ID: 0000 00:32:50.571 Subsystem Vendor ID: 0000 00:32:50.571 Serial Number: 3934c845f17b6d090372 00:32:50.571 Model Number: Linux 00:32:50.571 Firmware Version: 6.8.9-20 
00:32:50.571 Recommended Arb Burst: 0 00:32:50.571 IEEE OUI Identifier: 00 00 00 00:32:50.571 Multi-path I/O 00:32:50.571 May have multiple subsystem ports: No 00:32:50.571 May have multiple controllers: No 00:32:50.571 Associated with SR-IOV VF: No 00:32:50.571 Max Data Transfer Size: Unlimited 00:32:50.571 Max Number of Namespaces: 0 00:32:50.571 Max Number of I/O Queues: 1024 00:32:50.571 NVMe Specification Version (VS): 1.3 00:32:50.571 NVMe Specification Version (Identify): 1.3 00:32:50.571 Maximum Queue Entries: 128 00:32:50.571 Contiguous Queues Required: No 00:32:50.571 Arbitration Mechanisms Supported 00:32:50.571 Weighted Round Robin: Not Supported 00:32:50.571 Vendor Specific: Not Supported 00:32:50.571 Reset Timeout: 7500 ms 00:32:50.571 Doorbell Stride: 4 bytes 00:32:50.571 NVM Subsystem Reset: Not Supported 00:32:50.571 Command Sets Supported 00:32:50.571 NVM Command Set: Supported 00:32:50.571 Boot Partition: Not Supported 00:32:50.571 Memory Page Size Minimum: 4096 bytes 00:32:50.571 Memory Page Size Maximum: 4096 bytes 00:32:50.571 Persistent Memory Region: Not Supported 00:32:50.571 Optional Asynchronous Events Supported 00:32:50.571 Namespace Attribute Notices: Not Supported 00:32:50.571 Firmware Activation Notices: Not Supported 00:32:50.571 ANA Change Notices: Not Supported 00:32:50.571 PLE Aggregate Log Change Notices: Not Supported 00:32:50.571 LBA Status Info Alert Notices: Not Supported 00:32:50.571 EGE Aggregate Log Change Notices: Not Supported 00:32:50.571 Normal NVM Subsystem Shutdown event: Not Supported 00:32:50.571 Zone Descriptor Change Notices: Not Supported 00:32:50.571 Discovery Log Change Notices: Supported 00:32:50.571 Controller Attributes 00:32:50.571 128-bit Host Identifier: Not Supported 00:32:50.571 Non-Operational Permissive Mode: Not Supported 00:32:50.571 NVM Sets: Not Supported 00:32:50.571 Read Recovery Levels: Not Supported 00:32:50.571 Endurance Groups: Not Supported 00:32:50.571 Predictable Latency Mode: Not Supported 00:32:50.571 Traffic Based Keep ALive: Not Supported 00:32:50.571 Namespace Granularity: Not Supported 00:32:50.571 SQ Associations: Not Supported 00:32:50.571 UUID List: Not Supported 00:32:50.571 Multi-Domain Subsystem: Not Supported 00:32:50.571 Fixed Capacity Management: Not Supported 00:32:50.571 Variable Capacity Management: Not Supported 00:32:50.571 Delete Endurance Group: Not Supported 00:32:50.571 Delete NVM Set: Not Supported 00:32:50.571 Extended LBA Formats Supported: Not Supported 00:32:50.571 Flexible Data Placement Supported: Not Supported 00:32:50.571 00:32:50.571 Controller Memory Buffer Support 00:32:50.571 ================================ 00:32:50.571 Supported: No 00:32:50.571 00:32:50.571 Persistent Memory Region Support 00:32:50.571 ================================ 00:32:50.571 Supported: No 00:32:50.571 00:32:50.571 Admin Command Set Attributes 00:32:50.571 ============================ 00:32:50.571 Security Send/Receive: Not Supported 00:32:50.571 Format NVM: Not Supported 00:32:50.571 Firmware Activate/Download: Not Supported 00:32:50.571 Namespace Management: Not Supported 00:32:50.571 Device Self-Test: Not Supported 00:32:50.571 Directives: Not Supported 00:32:50.571 NVMe-MI: Not Supported 00:32:50.571 Virtualization Management: Not Supported 00:32:50.571 Doorbell Buffer Config: Not Supported 00:32:50.571 Get LBA Status Capability: Not Supported 00:32:50.571 Command & Feature Lockdown Capability: Not Supported 00:32:50.571 Abort Command Limit: 1 00:32:50.571 Async Event Request Limit: 1 00:32:50.571 
Number of Firmware Slots: N/A 00:32:50.571 Firmware Slot 1 Read-Only: N/A 00:32:50.571 Firmware Activation Without Reset: N/A 00:32:50.571 Multiple Update Detection Support: N/A 00:32:50.571 Firmware Update Granularity: No Information Provided 00:32:50.571 Per-Namespace SMART Log: No 00:32:50.571 Asymmetric Namespace Access Log Page: Not Supported 00:32:50.571 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:50.571 Command Effects Log Page: Not Supported 00:32:50.571 Get Log Page Extended Data: Supported 00:32:50.571 Telemetry Log Pages: Not Supported 00:32:50.571 Persistent Event Log Pages: Not Supported 00:32:50.571 Supported Log Pages Log Page: May Support 00:32:50.571 Commands Supported & Effects Log Page: Not Supported 00:32:50.571 Feature Identifiers & Effects Log Page:May Support 00:32:50.571 NVMe-MI Commands & Effects Log Page: May Support 00:32:50.571 Data Area 4 for Telemetry Log: Not Supported 00:32:50.571 Error Log Page Entries Supported: 1 00:32:50.571 Keep Alive: Not Supported 00:32:50.571 00:32:50.571 NVM Command Set Attributes 00:32:50.571 ========================== 00:32:50.571 Submission Queue Entry Size 00:32:50.571 Max: 1 00:32:50.571 Min: 1 00:32:50.571 Completion Queue Entry Size 00:32:50.571 Max: 1 00:32:50.571 Min: 1 00:32:50.571 Number of Namespaces: 0 00:32:50.571 Compare Command: Not Supported 00:32:50.571 Write Uncorrectable Command: Not Supported 00:32:50.571 Dataset Management Command: Not Supported 00:32:50.571 Write Zeroes Command: Not Supported 00:32:50.571 Set Features Save Field: Not Supported 00:32:50.571 Reservations: Not Supported 00:32:50.571 Timestamp: Not Supported 00:32:50.571 Copy: Not Supported 00:32:50.571 Volatile Write Cache: Not Present 00:32:50.571 Atomic Write Unit (Normal): 1 00:32:50.571 Atomic Write Unit (PFail): 1 00:32:50.571 Atomic Compare & Write Unit: 1 00:32:50.571 Fused Compare & Write: Not Supported 00:32:50.571 Scatter-Gather List 00:32:50.571 SGL Command Set: Supported 00:32:50.571 SGL Keyed: Supported 00:32:50.571 SGL Bit Bucket Descriptor: Not Supported 00:32:50.571 SGL Metadata Pointer: Not Supported 00:32:50.571 Oversized SGL: Not Supported 00:32:50.571 SGL Metadata Address: Not Supported 00:32:50.571 SGL Offset: Supported 00:32:50.571 Transport SGL Data Block: Not Supported 00:32:50.571 Replay Protected Memory Block: Not Supported 00:32:50.571 00:32:50.571 Firmware Slot Information 00:32:50.571 ========================= 00:32:50.571 Active slot: 0 00:32:50.571 00:32:50.571 00:32:50.571 Error Log 00:32:50.571 ========= 00:32:50.571 00:32:50.571 Active Namespaces 00:32:50.571 ================= 00:32:50.571 Discovery Log Page 00:32:50.571 ================== 00:32:50.571 Generation Counter: 2 00:32:50.571 Number of Records: 2 00:32:50.571 Record Format: 0 00:32:50.571 00:32:50.571 Discovery Log Entry 0 00:32:50.571 ---------------------- 00:32:50.571 Transport Type: 1 (RDMA) 00:32:50.571 Address Family: 1 (IPv4) 00:32:50.571 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:50.571 Entry Flags: 00:32:50.571 Duplicate Returned Information: 0 00:32:50.571 Explicit Persistent Connection Support for Discovery: 0 00:32:50.572 Transport Requirements: 00:32:50.572 Secure Channel: Not Specified 00:32:50.572 Port ID: 1 (0x0001) 00:32:50.572 Controller ID: 65535 (0xffff) 00:32:50.572 Admin Max SQ Size: 32 00:32:50.572 Transport Service Identifier: 4420 00:32:50.572 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:50.572 Transport Address: 192.168.100.8 00:32:50.572 Transport Specific Address Subtype - 
RDMA 00:32:50.572 RDMA QP Service Type: 1 (Reliable Connected) 00:32:50.572 RDMA Provider Type: 1 (No provider specified) 00:32:50.572 RDMA CM Service: 1 (RDMA_CM) 00:32:50.572 Discovery Log Entry 1 00:32:50.572 ---------------------- 00:32:50.572 Transport Type: 1 (RDMA) 00:32:50.572 Address Family: 1 (IPv4) 00:32:50.572 Subsystem Type: 2 (NVM Subsystem) 00:32:50.572 Entry Flags: 00:32:50.572 Duplicate Returned Information: 0 00:32:50.572 Explicit Persistent Connection Support for Discovery: 0 00:32:50.572 Transport Requirements: 00:32:50.572 Secure Channel: Not Specified 00:32:50.572 Port ID: 1 (0x0001) 00:32:50.572 Controller ID: 65535 (0xffff) 00:32:50.572 Admin Max SQ Size: 32 00:32:50.572 Transport Service Identifier: 4420 00:32:50.572 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:50.572 Transport Address: 192.168.100.8 00:32:50.572 Transport Specific Address Subtype - RDMA 00:32:50.572 RDMA QP Service Type: 1 (Reliable Connected) 00:32:50.572 RDMA Provider Type: 1 (No provider specified) 00:32:50.572 RDMA CM Service: 1 (RDMA_CM) 00:32:50.572 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:50.832 get_feature(0x01) failed 00:32:50.832 get_feature(0x02) failed 00:32:50.832 get_feature(0x04) failed 00:32:50.832 ===================================================== 00:32:50.832 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:32:50.832 ===================================================== 00:32:50.832 Controller Capabilities/Features 00:32:50.832 ================================ 00:32:50.832 Vendor ID: 0000 00:32:50.832 Subsystem Vendor ID: 0000 00:32:50.832 Serial Number: 1ca631439e1f466291fa 00:32:50.832 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:50.832 Firmware Version: 6.8.9-20 00:32:50.832 Recommended Arb Burst: 6 00:32:50.832 IEEE OUI Identifier: 00 00 00 00:32:50.832 Multi-path I/O 00:32:50.832 May have multiple subsystem ports: Yes 00:32:50.832 May have multiple controllers: Yes 00:32:50.832 Associated with SR-IOV VF: No 00:32:50.832 Max Data Transfer Size: 1048576 00:32:50.832 Max Number of Namespaces: 1024 00:32:50.832 Max Number of I/O Queues: 128 00:32:50.832 NVMe Specification Version (VS): 1.3 00:32:50.832 NVMe Specification Version (Identify): 1.3 00:32:50.833 Maximum Queue Entries: 128 00:32:50.833 Contiguous Queues Required: No 00:32:50.833 Arbitration Mechanisms Supported 00:32:50.833 Weighted Round Robin: Not Supported 00:32:50.833 Vendor Specific: Not Supported 00:32:50.833 Reset Timeout: 7500 ms 00:32:50.833 Doorbell Stride: 4 bytes 00:32:50.833 NVM Subsystem Reset: Not Supported 00:32:50.833 Command Sets Supported 00:32:50.833 NVM Command Set: Supported 00:32:50.833 Boot Partition: Not Supported 00:32:50.833 Memory Page Size Minimum: 4096 bytes 00:32:50.833 Memory Page Size Maximum: 4096 bytes 00:32:50.833 Persistent Memory Region: Not Supported 00:32:50.833 Optional Asynchronous Events Supported 00:32:50.833 Namespace Attribute Notices: Supported 00:32:50.833 Firmware Activation Notices: Not Supported 00:32:50.833 ANA Change Notices: Supported 00:32:50.833 PLE Aggregate Log Change Notices: Not Supported 00:32:50.833 LBA Status Info Alert Notices: Not Supported 00:32:50.833 EGE Aggregate Log Change Notices: Not Supported 00:32:50.833 Normal NVM Subsystem Shutdown event: Not 
Supported 00:32:50.833 Zone Descriptor Change Notices: Not Supported 00:32:50.833 Discovery Log Change Notices: Not Supported 00:32:50.833 Controller Attributes 00:32:50.833 128-bit Host Identifier: Supported 00:32:50.833 Non-Operational Permissive Mode: Not Supported 00:32:50.833 NVM Sets: Not Supported 00:32:50.833 Read Recovery Levels: Not Supported 00:32:50.833 Endurance Groups: Not Supported 00:32:50.833 Predictable Latency Mode: Not Supported 00:32:50.833 Traffic Based Keep ALive: Supported 00:32:50.833 Namespace Granularity: Not Supported 00:32:50.833 SQ Associations: Not Supported 00:32:50.833 UUID List: Not Supported 00:32:50.833 Multi-Domain Subsystem: Not Supported 00:32:50.833 Fixed Capacity Management: Not Supported 00:32:50.833 Variable Capacity Management: Not Supported 00:32:50.833 Delete Endurance Group: Not Supported 00:32:50.833 Delete NVM Set: Not Supported 00:32:50.833 Extended LBA Formats Supported: Not Supported 00:32:50.833 Flexible Data Placement Supported: Not Supported 00:32:50.833 00:32:50.833 Controller Memory Buffer Support 00:32:50.833 ================================ 00:32:50.833 Supported: No 00:32:50.833 00:32:50.833 Persistent Memory Region Support 00:32:50.833 ================================ 00:32:50.833 Supported: No 00:32:50.833 00:32:50.833 Admin Command Set Attributes 00:32:50.833 ============================ 00:32:50.833 Security Send/Receive: Not Supported 00:32:50.833 Format NVM: Not Supported 00:32:50.833 Firmware Activate/Download: Not Supported 00:32:50.833 Namespace Management: Not Supported 00:32:50.833 Device Self-Test: Not Supported 00:32:50.833 Directives: Not Supported 00:32:50.833 NVMe-MI: Not Supported 00:32:50.833 Virtualization Management: Not Supported 00:32:50.833 Doorbell Buffer Config: Not Supported 00:32:50.833 Get LBA Status Capability: Not Supported 00:32:50.833 Command & Feature Lockdown Capability: Not Supported 00:32:50.833 Abort Command Limit: 4 00:32:50.833 Async Event Request Limit: 4 00:32:50.833 Number of Firmware Slots: N/A 00:32:50.833 Firmware Slot 1 Read-Only: N/A 00:32:50.833 Firmware Activation Without Reset: N/A 00:32:50.833 Multiple Update Detection Support: N/A 00:32:50.833 Firmware Update Granularity: No Information Provided 00:32:50.833 Per-Namespace SMART Log: Yes 00:32:50.833 Asymmetric Namespace Access Log Page: Supported 00:32:50.833 ANA Transition Time : 10 sec 00:32:50.833 00:32:50.833 Asymmetric Namespace Access Capabilities 00:32:50.833 ANA Optimized State : Supported 00:32:50.833 ANA Non-Optimized State : Supported 00:32:50.833 ANA Inaccessible State : Supported 00:32:50.833 ANA Persistent Loss State : Supported 00:32:50.833 ANA Change State : Supported 00:32:50.833 ANAGRPID is not changed : No 00:32:50.833 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:50.833 00:32:50.833 ANA Group Identifier Maximum : 128 00:32:50.833 Number of ANA Group Identifiers : 128 00:32:50.833 Max Number of Allowed Namespaces : 1024 00:32:50.833 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:50.833 Command Effects Log Page: Supported 00:32:50.833 Get Log Page Extended Data: Supported 00:32:50.833 Telemetry Log Pages: Not Supported 00:32:50.833 Persistent Event Log Pages: Not Supported 00:32:50.833 Supported Log Pages Log Page: May Support 00:32:50.833 Commands Supported & Effects Log Page: Not Supported 00:32:50.833 Feature Identifiers & Effects Log Page:May Support 00:32:50.833 NVMe-MI Commands & Effects Log Page: May Support 00:32:50.833 Data Area 4 for Telemetry Log: Not Supported 00:32:50.833 Error Log Page 
Entries Supported: 128 00:32:50.833 Keep Alive: Supported 00:32:50.833 Keep Alive Granularity: 1000 ms 00:32:50.833 00:32:50.833 NVM Command Set Attributes 00:32:50.833 ========================== 00:32:50.833 Submission Queue Entry Size 00:32:50.833 Max: 64 00:32:50.833 Min: 64 00:32:50.833 Completion Queue Entry Size 00:32:50.833 Max: 16 00:32:50.833 Min: 16 00:32:50.833 Number of Namespaces: 1024 00:32:50.833 Compare Command: Not Supported 00:32:50.833 Write Uncorrectable Command: Not Supported 00:32:50.833 Dataset Management Command: Supported 00:32:50.833 Write Zeroes Command: Supported 00:32:50.833 Set Features Save Field: Not Supported 00:32:50.833 Reservations: Not Supported 00:32:50.833 Timestamp: Not Supported 00:32:50.833 Copy: Not Supported 00:32:50.833 Volatile Write Cache: Present 00:32:50.833 Atomic Write Unit (Normal): 1 00:32:50.833 Atomic Write Unit (PFail): 1 00:32:50.833 Atomic Compare & Write Unit: 1 00:32:50.833 Fused Compare & Write: Not Supported 00:32:50.833 Scatter-Gather List 00:32:50.833 SGL Command Set: Supported 00:32:50.833 SGL Keyed: Supported 00:32:50.833 SGL Bit Bucket Descriptor: Not Supported 00:32:50.833 SGL Metadata Pointer: Not Supported 00:32:50.833 Oversized SGL: Not Supported 00:32:50.833 SGL Metadata Address: Not Supported 00:32:50.833 SGL Offset: Supported 00:32:50.833 Transport SGL Data Block: Not Supported 00:32:50.833 Replay Protected Memory Block: Not Supported 00:32:50.833 00:32:50.833 Firmware Slot Information 00:32:50.833 ========================= 00:32:50.833 Active slot: 0 00:32:50.833 00:32:50.833 Asymmetric Namespace Access 00:32:50.833 =========================== 00:32:50.833 Change Count : 0 00:32:50.833 Number of ANA Group Descriptors : 1 00:32:50.833 ANA Group Descriptor : 0 00:32:50.833 ANA Group ID : 1 00:32:50.833 Number of NSID Values : 1 00:32:50.833 Change Count : 0 00:32:50.833 ANA State : 1 00:32:50.833 Namespace Identifier : 1 00:32:50.833 00:32:50.833 Commands Supported and Effects 00:32:50.833 ============================== 00:32:50.833 Admin Commands 00:32:50.833 -------------- 00:32:50.833 Get Log Page (02h): Supported 00:32:50.833 Identify (06h): Supported 00:32:50.833 Abort (08h): Supported 00:32:50.833 Set Features (09h): Supported 00:32:50.833 Get Features (0Ah): Supported 00:32:50.833 Asynchronous Event Request (0Ch): Supported 00:32:50.833 Keep Alive (18h): Supported 00:32:50.833 I/O Commands 00:32:50.833 ------------ 00:32:50.833 Flush (00h): Supported 00:32:50.833 Write (01h): Supported LBA-Change 00:32:50.833 Read (02h): Supported 00:32:50.833 Write Zeroes (08h): Supported LBA-Change 00:32:50.833 Dataset Management (09h): Supported 00:32:50.833 00:32:50.833 Error Log 00:32:50.833 ========= 00:32:50.833 Entry: 0 00:32:50.833 Error Count: 0x3 00:32:50.833 Submission Queue Id: 0x0 00:32:50.833 Command Id: 0x5 00:32:50.833 Phase Bit: 0 00:32:50.833 Status Code: 0x2 00:32:50.833 Status Code Type: 0x0 00:32:50.833 Do Not Retry: 1 00:32:50.833 Error Location: 0x28 00:32:50.833 LBA: 0x0 00:32:50.833 Namespace: 0x0 00:32:50.833 Vendor Log Page: 0x0 00:32:50.833 ----------- 00:32:50.833 Entry: 1 00:32:50.833 Error Count: 0x2 00:32:50.833 Submission Queue Id: 0x0 00:32:50.833 Command Id: 0x5 00:32:50.833 Phase Bit: 0 00:32:50.833 Status Code: 0x2 00:32:50.833 Status Code Type: 0x0 00:32:50.833 Do Not Retry: 1 00:32:50.833 Error Location: 0x28 00:32:50.833 LBA: 0x0 00:32:50.833 Namespace: 0x0 00:32:50.833 Vendor Log Page: 0x0 00:32:50.833 ----------- 00:32:50.833 Entry: 2 00:32:50.833 Error Count: 0x1 00:32:50.833 
Submission Queue Id: 0x0 00:32:50.833 Command Id: 0x0 00:32:50.833 Phase Bit: 0 00:32:50.833 Status Code: 0x2 00:32:50.833 Status Code Type: 0x0 00:32:50.833 Do Not Retry: 1 00:32:50.833 Error Location: 0x28 00:32:50.833 LBA: 0x0 00:32:50.833 Namespace: 0x0 00:32:50.833 Vendor Log Page: 0x0 00:32:50.833 00:32:50.833 Number of Queues 00:32:50.833 ================ 00:32:50.833 Number of I/O Submission Queues: 128 00:32:50.833 Number of I/O Completion Queues: 128 00:32:50.833 00:32:50.833 ZNS Specific Controller Data 00:32:50.833 ============================ 00:32:50.833 Zone Append Size Limit: 0 00:32:50.833 00:32:50.833 00:32:50.833 Active Namespaces 00:32:50.833 ================= 00:32:50.834 get_feature(0x05) failed 00:32:50.834 Namespace ID:1 00:32:50.834 Command Set Identifier: NVM (00h) 00:32:50.834 Deallocate: Supported 00:32:50.834 Deallocated/Unwritten Error: Not Supported 00:32:50.834 Deallocated Read Value: Unknown 00:32:50.834 Deallocate in Write Zeroes: Not Supported 00:32:50.834 Deallocated Guard Field: 0xFFFF 00:32:50.834 Flush: Supported 00:32:50.834 Reservation: Not Supported 00:32:50.834 Namespace Sharing Capabilities: Multiple Controllers 00:32:50.834 Size (in LBAs): 7814037168 (3726GiB) 00:32:50.834 Capacity (in LBAs): 7814037168 (3726GiB) 00:32:50.834 Utilization (in LBAs): 7814037168 (3726GiB) 00:32:50.834 UUID: 032223d4-1495-4fd6-ba5d-6b80c58cb5b4 00:32:50.834 Thin Provisioning: Not Supported 00:32:50.834 Per-NS Atomic Units: Yes 00:32:50.834 Atomic Boundary Size (Normal): 0 00:32:50.834 Atomic Boundary Size (PFail): 0 00:32:50.834 Atomic Boundary Offset: 0 00:32:50.834 NGUID/EUI64 Never Reused: No 00:32:50.834 ANA group ID: 1 00:32:50.834 Namespace Write Protected: No 00:32:50.834 Number of LBA Formats: 1 00:32:50.834 Current LBA Format: LBA Format #00 00:32:50.834 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:50.834 00:32:50.834 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:50.834 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:50.834 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:32:50.834 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:32:50.834 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:32:50.834 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:32:50.834 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:50.834 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:32:50.834 rmmod nvme_rdma 00:32:50.834 rmmod nvme_fabrics 00:32:50.834 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:50.834 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:32:50.834 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:32:50.834 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:32:50.834 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:50.834 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:32:50.834 02:14:10 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:50.834 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:50.834 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:32:50.834 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:50.834 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:50.834 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:50.834 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:51.093 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:32:51.093 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_rdma nvmet 00:32:51.093 02:14:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:32:53.628 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:53.628 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:53.628 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:53.628 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:53.628 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:53.628 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:53.628 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:53.628 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:53.628 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:53.628 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:53.628 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:53.628 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:53.628 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:53.628 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:53.628 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:53.628 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:56.920 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:32:56.920 00:32:56.920 real 0m18.065s 00:32:56.920 user 0m4.480s 00:32:56.920 sys 0m9.684s 00:32:56.920 02:14:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:56.920 02:14:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:56.920 ************************************ 00:32:56.920 END TEST nvmf_identify_kernel_target 00:32:56.920 ************************************ 00:32:56.920 02:14:16 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:32:56.920 02:14:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:56.920 02:14:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:56.920 02:14:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.920 ************************************ 00:32:56.920 START TEST nvmf_auth_host 00:32:56.920 ************************************ 
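(Before the auth-test trace begins: the configure_kernel_target / clean_kernel_target pair that framed the identify_kernel_target run above condenses to the standard kernel nvmet configfs sequence. The sketch below is assembled from the xtrace, not the verbatim nvmf/common.sh; the attribute file names -- attr_model, attr_allow_any_host, device_path, enable, addr_* -- are the stock nvmet configfs attributes and are an assumption here, since the trace shows only the echoed values.)

# Setup (configure_kernel_target, nvmf/common.sh@658-@703):
modprobe nvmet
cfg=/sys/kernel/config/nvmet
subsys=$cfg/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys" "$subsys/namespaces/1" "$cfg/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # matches the Model Number reported by identify above
echo 1 > "$subsys/attr_allow_any_host"                         # assumed target of the bare 'echo 1' at @693
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # the unpartitioned, non-zoned disk found earlier
echo 1 > "$subsys/namespaces/1/enable"
echo 192.168.100.8 > "$cfg/ports/1/addr_traddr"
echo rdma > "$cfg/ports/1/addr_trtype"
echo 4420 > "$cfg/ports/1/addr_trsvcid"
echo ipv4 > "$cfg/ports/1/addr_adrfam"
ln -s "$subsys" "$cfg/ports/1/subsystems/"                     # expose the subsystem on the port
# Teardown (clean_kernel_target, nvmf/common.sh@710-@721), in reverse:
echo 0 > "$subsys/namespaces/1/enable"                         # assumed target of the bare 'echo 0' at @712
rm -f "$cfg/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$cfg/ports/1" "$subsys"
modprobe -r nvmet_rdma nvmet                                   # unload once configfs is empty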
00:32:56.920 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:32:56.920 * Looking for test storage... 00:32:56.920 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:32:56.920 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:56.920 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:32:56.920 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:57.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.181 --rc genhtml_branch_coverage=1 00:32:57.181 --rc genhtml_function_coverage=1 00:32:57.181 --rc genhtml_legend=1 00:32:57.181 --rc geninfo_all_blocks=1 00:32:57.181 --rc geninfo_unexecuted_blocks=1 00:32:57.181 00:32:57.181 ' 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:57.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.181 --rc genhtml_branch_coverage=1 00:32:57.181 --rc genhtml_function_coverage=1 00:32:57.181 --rc genhtml_legend=1 00:32:57.181 --rc geninfo_all_blocks=1 00:32:57.181 --rc geninfo_unexecuted_blocks=1 00:32:57.181 00:32:57.181 ' 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:57.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.181 --rc genhtml_branch_coverage=1 00:32:57.181 --rc genhtml_function_coverage=1 00:32:57.181 --rc genhtml_legend=1 00:32:57.181 --rc geninfo_all_blocks=1 00:32:57.181 --rc geninfo_unexecuted_blocks=1 00:32:57.181 00:32:57.181 ' 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:57.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.181 --rc genhtml_branch_coverage=1 00:32:57.181 --rc genhtml_function_coverage=1 00:32:57.181 --rc genhtml_legend=1 00:32:57.181 --rc geninfo_all_blocks=1 00:32:57.181 --rc geninfo_unexecuted_blocks=1 00:32:57.181 00:32:57.181 ' 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:57.181 02:14:16 
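[editor's note] The "lt 1.15 2" trace above is the coverage prologue deciding whether the installed lcov predates 2.x (and hence which LCOV_OPTS to export): cmp_versions splits both version strings on ".", "-" and ":" into arrays and compares them field by field, treating missing fields as 0. A standalone sketch of the same pattern (not a copy of scripts/common.sh, which additionally validates each field is a plain decimal via its decimal helper):

    lt() {  # usage: lt 1.15 2  -> returns 0 if $1 < $2
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) i
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal is not less-than
    }

Here lt 1.15 2 compares 1 against 2 in the first field and immediately returns true, matching the "return 0" in the trace.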
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.181 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:57.182 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
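[editor's note] The "[: : integer expression expected" message above is a genuine (and here harmless) bash error captured by the log: build_nvmf_app_args evaluates '[' '' -eq 1 ']', and test's -eq operator requires integer operands on both sides, so an unset or empty variable trips it. A hedged defensive pattern for such checks (the variable name is hypothetical, not the one on common.sh line 33):

    flag=""                              # hypothetical, unset/empty in this run
    [ "$flag" -eq 1 ] && echo enabled    # -> "[: : integer expression expected"
    [ "${flag:-0}" -eq 1 ] && echo enabled   # defaulting to 0 avoids the error

The run continues regardless because the failed test simply takes the false branch.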
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:57.182 02:14:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:33:03.747 Found 0000:18:00.0 (0x8086 - 0x159b) 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 
- 0x159b)' 00:33:03.747 Found 0000:18:00.1 (0x8086 - 0x159b) 00:33:03.747 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@403 -- # modinfo irdma 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:33:03.748 Found net devices under 0000:18:00.0: cvl_0_0 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:33:03.748 Found net devices under 0000:18:00.1: cvl_0_1 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:03.748 02:14:23 
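[editor's note] The discovery pass above walks a PCI allow-list (the e810/x722/mlx ID arrays), matches each function's vendor:device pair, loads irdma with roce_ena=1 so the E810 ports (device 0x159b, per the e810 array) speak RoCE rather than iWARP, and records the net devices under each function. The per-device lookup reduces to sysfs reads; a minimal sketch assuming only the standard sysfs layout, with the addresses taken from the log:

    for pci in 0000:18:00.0 0000:18:00.1; do
        vendor=$(cat /sys/bus/pci/devices/$pci/vendor)    # 0x8086
        device=$(cat /sys/bus/pci/devices/$pci/device)    # 0x159b (Intel E810 family)
        driver=$(basename "$(readlink /sys/bus/pci/devices/$pci/driver)")   # ice
        echo "Found $pci ($vendor - $device), driver $driver"
        ls "/sys/bus/pci/devices/$pci/net/"               # e.g. cvl_0_0 / cvl_0_1
    done
    modprobe irdma roce_ena=1    # RDMA driver for E810, RoCE enabled (as in the trace)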
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # rdma_device_init 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # allocate_nic_ips 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo cvl_0_0 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo cvl_0_1 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:33:03.748 02:14:23 
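[editor's note] rdma_device_init above is simply a modprobe of the kernel RDMA stack; get_rdma_if_list then keeps only net devices that rxe_cfg also reports (the "continue 2" jumps in the trace), i.e. interfaces actually backed by an RDMA device. The module step, verbatim from the trace as a loop:

    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done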
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:33:03.748 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:33:03.748 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:33:03.748 altname enp24s0f0np0 00:33:03.748 altname ens785f0np0 00:33:03.748 inet 192.168.100.8/24 scope global cvl_0_0 00:33:03.748 valid_lft forever preferred_lft forever 00:33:03.748 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:33:03.748 valid_lft forever preferred_lft forever 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:33:03.748 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:33:03.748 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:33:03.748 altname enp24s0f1np1 00:33:03.748 altname ens785f1np1 00:33:03.748 inet 192.168.100.9/24 scope global cvl_0_1 00:33:03.748 valid_lft forever preferred_lft forever 00:33:03.748 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:33:03.748 valid_lft forever preferred_lft forever 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:03.748 02:14:23 
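[editor's note] Each interface's IPv4 address is pulled from one-line ip output: field 4 of "ip -o -4 addr show" is addr/prefix, and cut drops the prefix length. As a standalone helper using the exact pipeline from the trace (with multiple addresses on one interface it would print several lines; here each port has one):

    get_ip_address() {  # usage: get_ip_address cvl_0_0  -> 192.168.100.8
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }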
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo cvl_0_0 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo cvl_0_1 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:33:03.748 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:33:03.749 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:33:03.749 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:03.749 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:03.749 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:33:03.749 192.168.100.9' 00:33:03.749 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:33:03.749 192.168.100.9' 00:33:03.749 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # head -n 1 00:33:03.749 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:04.007 
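[editor's note] RDMA_IP_LIST ends up as a newline-separated string ("192.168.100.8" then "192.168.100.9"), and the first and second target IPs are sliced out with the head/tail pipelines visible in the trace around this point:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)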
02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:33:04.008 192.168.100.9' 00:33:04.008 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # tail -n +2 00:33:04.008 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # head -n 1 00:33:04.008 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:04.008 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:33:04.008 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:04.008 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:33:04.008 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:33:04.008 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:33:04.008 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:04.008 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:04.008 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:04.008 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.008 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=3403384 00:33:04.008 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:04.008 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 3403384 00:33:04.008 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3403384 ']' 00:33:04.008 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:04.008 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:04.008 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
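[editor's note] nvmfappstart launches nvmf_tgt with auth debug logging (-L nvme_auth) and records its pid (3403384 above); waitforlisten then blocks until the app answers on its RPC socket. A simplified sketch of that wait, assuming the socket path from the trace; the real waitforlisten polls the RPC itself and the retry policy below is my assumption:

    /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        [[ -S /var/tmp/spdk.sock ]] && break   # RPC socket is up
        kill -0 "$nvmfpid" || exit 1           # bail out if the target died
        sleep 0.1
    done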
00:33:04.008 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:04.008 02:14:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6708eb4ba5ac427576c3d4b7855ee495 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.n2S 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6708eb4ba5ac427576c3d4b7855ee495 0 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6708eb4ba5ac427576c3d4b7855ee495 0 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=6708eb4ba5ac427576c3d4b7855ee495 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.n2S 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.n2S 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.n2S 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len 
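[editor's note] gen_dhchap_key, traced repeatedly below, draws len/2 random bytes with xxd and hands the hex string plus a digest id (0=null, 1=sha256, 2=sha384, 3=sha512, matching the digests map in the trace) to a small embedded python that emits the DHHC-1 on-wire form. The sketch below assumes the NVMe DH-HMAC-CHAP convention of base64(secret || crc32(secret), little-endian) prefixed with DHHC-1:<id>:; it is not a copy of format_dhchap_key, and in particular whether the ASCII hex or the decoded bytes feed the encoder is glossed here:

    gen_dhchap_key() {  # usage: gen_dhchap_key sha256 32  -> prints the key file path
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local hex file
        hex=$(xxd -p -c0 -l $(($2 / 2)) /dev/urandom)    # $2 hex chars of randomness
        file=$(mktemp -t "spdk.key-$1.XXX")
        python3 -c 'import base64,sys,zlib; s=bytes.fromhex(sys.argv[1]); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(s + zlib.crc32(s).to_bytes(4, "little")).decode()))' "$hex" "${digests[$1]}" > "$file"
        chmod 0600 "$file"   # key files must not be world-readable, as in the trace
        echo "$file"
    }

The keys/ckeys arrays then collect five subsystem keys and their optional controller (bidirectional) counterparts.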
file key 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=a3c22577fb779c57ea4a3bc942254d5ed2f25b19708dff8d2df5400b5f7fb9fa 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.KJu 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key a3c22577fb779c57ea4a3bc942254d5ed2f25b19708dff8d2df5400b5f7fb9fa 3 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 a3c22577fb779c57ea4a3bc942254d5ed2f25b19708dff8d2df5400b5f7fb9fa 3 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=a3c22577fb779c57ea4a3bc942254d5ed2f25b19708dff8d2df5400b5f7fb9fa 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.KJu 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.KJu 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.KJu 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=3672948ad9c1ebf3c8817388bae325e8dbe6ef8cf24eea43 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.yvt 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 3672948ad9c1ebf3c8817388bae325e8dbe6ef8cf24eea43 0 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key 
DHHC-1 3672948ad9c1ebf3c8817388bae325e8dbe6ef8cf24eea43 0 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=3672948ad9c1ebf3c8817388bae325e8dbe6ef8cf24eea43 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.yvt 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.yvt 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.yvt 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=074ec748e55164bbab38f974ddec7479bfbcff380714974a 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.H1x 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 074ec748e55164bbab38f974ddec7479bfbcff380714974a 2 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 074ec748e55164bbab38f974ddec7479bfbcff380714974a 2 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=074ec748e55164bbab38f974ddec7479bfbcff380714974a 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:33:04.944 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:05.203 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.H1x 00:33:05.203 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.H1x 00:33:05.203 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.H1x 00:33:05.203 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:05.203 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:05.203 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:05.203 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:05.203 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:33:05.203 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:33:05.203 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=886a1d3640408099ba9219ff5bdfb8bf 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.2EF 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 886a1d3640408099ba9219ff5bdfb8bf 1 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 886a1d3640408099ba9219ff5bdfb8bf 1 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=886a1d3640408099ba9219ff5bdfb8bf 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.2EF 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.2EF 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.2EF 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=8dd0a2fc306b79bc9b296ff69f0d754b 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.hAE 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 8dd0a2fc306b79bc9b296ff69f0d754b 1 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 8dd0a2fc306b79bc9b296ff69f0d754b 1 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
prefix=DHHC-1 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=8dd0a2fc306b79bc9b296ff69f0d754b 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.hAE 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.hAE 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.hAE 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=45b86436fd171bd99fcc6d21d47d67db4b60e9fccf18aace 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.1Nt 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 45b86436fd171bd99fcc6d21d47d67db4b60e9fccf18aace 2 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 45b86436fd171bd99fcc6d21d47d67db4b60e9fccf18aace 2 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=45b86436fd171bd99fcc6d21d47d67db4b60e9fccf18aace 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.1Nt 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.1Nt 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.1Nt 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 
00:33:05.204 02:14:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:33:05.204 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:05.204 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=a8cdab573a5e49a966bb751c686aeec7 00:33:05.204 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:33:05.204 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.zL4 00:33:05.204 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key a8cdab573a5e49a966bb751c686aeec7 0 00:33:05.204 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 a8cdab573a5e49a966bb751c686aeec7 0 00:33:05.204 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:05.204 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:05.204 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=a8cdab573a5e49a966bb751c686aeec7 00:33:05.204 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:33:05.204 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.zL4 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.zL4 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.zL4 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=1cd0e9b6ab9e4348b4b0370b28db40bcc28cf42484e84c59017838b862b5ffed 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.C0c 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 1cd0e9b6ab9e4348b4b0370b28db40bcc28cf42484e84c59017838b862b5ffed 3 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 1cd0e9b6ab9e4348b4b0370b28db40bcc28cf42484e84c59017838b862b5ffed 3 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=1cd0e9b6ab9e4348b4b0370b28db40bcc28cf42484e84c59017838b862b5ffed 00:33:05.463 02:14:25 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.C0c 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.C0c 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.C0c 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3403384 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3403384 ']' 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:05.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:05.463 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.n2S 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.KJu ]] 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KJu 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.yvt 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- 
# [[ -n /tmp/spdk.key-sha384.H1x ]] 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.H1x 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.2EF 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.hAE ]] 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hAE 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.1Nt 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.zL4 ]] 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.zL4 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.C0c 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:05.722 02:14:25 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip
00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]]
00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP
00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]]
00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8
00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8
00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8
00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet
00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme
00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]]
00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet
00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]]
00:33:05.722 02:14:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset
00:33:09.006 Waiting for block devices as requested
00:33:09.006 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:33:09.006 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:33:09.006 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:33:09.264 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:33:09.264 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:33:09.264 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:33:09.523 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:33:09.523 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:33:09.523 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:33:09.782 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:33:09.782 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:33:09.782 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:33:10.039 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:33:10.039 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:33:10.039 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:33:10.297 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:33:10.297 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme*
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]]
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
No valid GPT data, bailing
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]]
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1
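The mkdir calls above only create the configfs nodes; the echo entries that follow fill in their attributes, and bash xtrace does not print redirection targets. Condensed under the standard kernel nvmet configfs layout, the wiring presumably looks like the sketch below (the attr_model and attr_allow_any_host targets are inferred from the echoed values, not shown in the trace):

  # Hedged reconstruction of configure_kernel_target's configfs writes
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # assumed target
  echo 1 > "$subsys/attr_allow_any_host"                        # assumed target
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"        # back the namespace with the local NVMe disk
  echo 1 > "$subsys/namespaces/1/enable"
  echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"             # RDMA listener address
  echo rdma > "$nvmet/ports/1/addr_trtype"
  echo 4420 > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4 > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"                  # expose the subsystem on the port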
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 192.168.100.8
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo rdma
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 --hostid=80e71deb-ee4e-e711-906e-0012795d9712 -a 192.168.100.8 -t rdma -s 4420
00:33:11.233
00:33:11.233 Discovery Log Number of Records 2, Generation counter 2
00:33:11.233 =====Discovery Log Entry 0======
00:33:11.233 trtype: rdma
00:33:11.233 adrfam: ipv4
00:33:11.233 subtype: current discovery subsystem
00:33:11.233 treq: not specified, sq flow control disable supported
00:33:11.233 portid: 1
00:33:11.233 trsvcid: 4420
00:33:11.233 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:33:11.233 traddr: 192.168.100.8
00:33:11.233 eflags: none
00:33:11.233 rdma_prtype: not specified
00:33:11.233 rdma_qptype: connected
00:33:11.233 rdma_cms: rdma-cm
00:33:11.233 rdma_pkey: 0x0000
00:33:11.233 =====Discovery Log Entry 1======
00:33:11.233 trtype: rdma
00:33:11.233 adrfam: ipv4
00:33:11.233 subtype: nvme subsystem
00:33:11.233 treq: not specified, sq flow control disable supported
00:33:11.233 portid: 1
00:33:11.233 trsvcid: 4420
00:33:11.233 subnqn: nqn.2024-02.io.spdk:cnode0
00:33:11.233 traddr: 192.168.100.8
00:33:11.233 eflags: none
00:33:11.233 rdma_prtype: not specified
00:33:11.233 rdma_qptype: connected
00:33:11.233 rdma_cms: rdma-cm
00:33:11.233 rdma_pkey: 0x0000
00:33:11.233 02:14:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
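Before any keys are set, host/auth.sh@36-38 restrict the new subsystem to the single test host NQN. The target of the bare echo 0 is hidden by xtrace; under the standard nvmet layout it presumably turns attr_allow_any_host back off so that only the linked host may connect:

  # Hedged reconstruction of the per-host authorization step
  mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host  # assumed target
  ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 \
        /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0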
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==:
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==:
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==:
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: ]]
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==:
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]]
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP
00:33:11.233 02:14:31
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:11.233 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.234 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.492 nvme0n1 00:33:11.492 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.492 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.492 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.492 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.492 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.492 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.492 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.492 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: ]] 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.493 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.752 nvme0n1 00:33:11.752 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.752 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.752 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.752 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
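Every connect_authenticate round in this loop repeats the same initiator-side recipe that the trace above shows for key0. Condensed (rpc_cmd is the test wrapper around scripts/rpc.py; the key names were registered with keyring_file_add_key earlier in the run):

  # One authenticated connect, as exercised per digest/dhgroup/keyid
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0              # tear down before the next combination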
00:33:11.752 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.752 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.752 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.752 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.752 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.752 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.752 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.752 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.752 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:11.752 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.752 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:11.752 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:11.752 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: ]] 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.753 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.012 nvme0n1 00:33:12.012 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.012 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.012 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.012 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.012 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.012 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.012 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.012 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.012 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.012 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.012 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.012 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.012 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:12.012 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.012 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:12.012 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:12.012 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:33:12.012 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: ]] 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:12.013 02:14:31 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.013 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.272 nvme0n1 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: ]] 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:12.272 02:14:31 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.272 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.273 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:12.273 02:14:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:12.273 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:12.273 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.273 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.273 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:12.273 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:12.273 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:12.273 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:12.273 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:12.273 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:12.273 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.273 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.532 nvme0n1 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:12.532 02:14:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.532 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.792 nvme0n1 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: ]] 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 
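nvmet_auth_set_key mirrors the digest, DH group, and both DH-HMAC-CHAP secrets into the kernel target for the host; xtrace again hides where the echoes land, but with the kernel's per-host dhchap_* attributes the writes for this ffdhe3072/key0 round would look roughly like:

  # Hedged reconstruction of nvmet_auth_set_key (key/ckey hold the DHHC-1 strings echoed above)
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"     # negotiated digest
  echo ffdhe3072 > "$host/dhchap_dhgroup"       # DH group for this round
  echo "$key" > "$host/dhchap_key"              # host secret
  echo "$ckey" > "$host/dhchap_ctrl_key"        # controller secret (bidirectional auth)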
00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.792 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.051 nvme0n1 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: ]] 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.051 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.310 nvme0n1 00:33:13.310 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.310 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:13.311 02:14:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: ]] 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.311 02:14:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.570 nvme0n1 00:33:13.570 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.571 
02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: ]] 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.571 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.830 nvme0n1 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:13.830 02:14:33 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.830 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.090 nvme0n1 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: ]] 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:14.090 
02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.090 02:14:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.349 nvme0n1 00:33:14.349 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.349 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.349 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.349 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.349 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.349 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.349 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.349 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.349 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.349 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
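The records above repeat one authentication round per key id: the target side is primed via nvmet_auth_set_key, the host is restricted to the digest/dhgroup pair under test, a controller is attached with the matching DH-HMAC-CHAP secrets, and the attach is verified and torn down. A minimal sketch of that round, assuming rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py, that keyN/ckeyN are the key names installed earlier by the test, and that ckeys[] is the controller-secret array from host/auth.sh (connect_round itself is an illustrative name):

    connect_round() {
        local digest=$1 dhgroup=$2 keyid=$3
        # limit the host to the digest/dhgroup pair being exercised
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # attach; the controller key is sent only when a ckey exists for this keyid,
        # so key ids with an empty ckey exercise unidirectional authentication
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
        # the round passes only if the controller actually materialized as nvme0
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The trailing "nvme0n1" lines in the trace are the namespace surfacing after each successful attach; the `[[ nvme0 == \n\v\m\e\0 ]]` comparisons are the jq output check from the sketch above as bash prints it under xtrace.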
00:33:14.349 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.349 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.349 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:33:14.349 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.349 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:14.349 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:14.349 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:14.349 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:14.349 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:14.349 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:14.349 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: ]] 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.350 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.608 nvme0n1 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:14.608 02:14:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: ]] 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:14.608 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:14.867 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:14.867 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.867 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.867 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:14.867 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:14.867 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:14.867 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:14.867 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:14.867 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:14.867 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.867 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.867 nvme0n1 00:33:14.867 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.867 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.867 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.867 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.867 02:14:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.867 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.125 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: ]] 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.126 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.385 nvme0n1 00:33:15.385 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.385 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.385 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.385 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.385 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.385 02:14:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.385 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.644 nvme0n1 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: ]] 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:15.644 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:15.645 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:15.645 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:15.645 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.645 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.903 nvme0n1 00:33:15.903 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.163 02:14:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: ]] 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 
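The ip_candidates records traced here are get_main_ns_ip resolving which address to dial: it maps the transport to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and then dereferences that name, which is the ip=/echo pair that follows. A condensed sketch of that logic; the selector is written as TEST_TRANSPORT, an assumed name, since the trace only shows its expanded value (rdma):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP   # target-side address for RDMA runs
            [tcp]=NVMF_INITIATOR_IP       # initiator-side address for TCP runs
        )
        # bail out if the transport is unset or has no mapping
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_FIRST_TARGET_IP
        [[ -z ${!ip} ]] && return 1            # indirect expansion -> 192.168.100.8
        echo "${!ip}"
    }

This is why every bdev_nvme_attach_controller in the trace targets 192.168.100.8: the rdma mapping resolves NVMF_FIRST_TARGET_IP once per round before the attach is issued.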
00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.163 02:14:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.422 nvme0n1 00:33:16.422 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.422 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.422 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.422 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.422 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.422 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.422 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.422 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.422 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.422 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: ]] 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:16.681 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:16.682 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.682 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.941 nvme0n1 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.941 02:14:36 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:16.941 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: ]] 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.942 02:14:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.517 nvme0n1 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:17.517 02:14:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:17.517 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.518 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.778 nvme0n1 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
jq -r '.[].name' 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: ]] 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:17.778 02:14:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.778 02:14:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.347 nvme0n1 00:33:18.347 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.347 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.347 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.347 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.347 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.347 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.347 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.347 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.347 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.347 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.618 02:14:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: ]] 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:18.618 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.619 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.619 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:18.619 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:18.619 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:18.619 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:18.619 02:14:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:18.619 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:18.619 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.619 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.190 nvme0n1 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: ]] 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.190 02:14:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.760 nvme0n1 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: ]] 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.760 02:14:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.760 02:14:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.328 nvme0n1 00:33:20.328 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.328 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.329 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.898 nvme0n1 00:33:20.898 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.898 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.898 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.898 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.898 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.898 02:14:40 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.898 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.898 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.898 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.898 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: ]] 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.158 nvme0n1 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.158 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: ]] 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:21.419 02:14:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.419 nvme0n1 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: ]] 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.419 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.679 nvme0n1 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.679 02:14:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: ]] 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.679 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.939 nvme0n1 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:21.939 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 
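The entries just above are the target-side half of a round: nvmet_auth_set_key (host/auth.sh@42-51) programs the digest, DH group, and DHHC-1 secrets for key slot 4, whose controller key is empty, hence the short-circuit at the final [[ -z '' ]]. A minimal sketch of what those bare echo entries imply, assuming they are redirected into the kernel nvmet configfs attributes for the allowed host; bash xtrace never prints redirections, so the destinations below are an assumption, not something this log shows:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # Assumed configfs path for the allowed host (not visible in the trace):
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"     # auth.sh@48: 'hmac(sha384)'
        echo "$dhgroup" > "$host/dhchap_dhgroup"       # auth.sh@49: ffdhe2048 here
        echo "$key" > "$host/dhchap_key"               # auth.sh@50: the DHHC-1 secret
        # auth.sh@51: written only when the slot has a controller key
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrlr_key"
    }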
00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.940 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.200 nvme0n1 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.200 
02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: ]] 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # 
local ip 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.200 02:14:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.460 nvme0n1 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: ]] 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:22.460 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.460 02:14:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.720 nvme0n1 00:33:22.720 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.720 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.720 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.720 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.720 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.720 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.720 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.720 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.720 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.720 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.720 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.720 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.720 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:22.720 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.720 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:22.720 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:22.720 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:22.720 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:22.720 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: ]] 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.721 02:14:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.721 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.981 nvme0n1 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 
3 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: ]] 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.981 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.242 nvme0n1 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 
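The ip_candidates run that precedes every attach (one finished a few entries back, ending in echo 192.168.100.8) is get_main_ns_ip picking the address to dial for the transport under test. Reassembled from the traced lines at nvmf/common.sh@767-781; only the expanded values appear in the log, so the indirect expansion below is inferred:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # variable names, not addresses
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # @773 traces as [[ -z rdma ]] and [[ -z NVMF_FIRST_TARGET_IP ]] in this run
        if [[ -z $TEST_TRANSPORT ]] || [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
            return 1
        fi
        ip=${ip_candidates[$TEST_TRANSPORT]}         # @774: ip=NVMF_FIRST_TARGET_IP
        [[ -z ${!ip} ]] && return 1                  # @776: [[ -z 192.168.100.8 ]]
        echo "${!ip}"                                # @781: echo 192.168.100.8
    }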
00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:23.242 02:14:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.242 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.506 nvme0n1 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
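That closes the ffdhe3072 round; immediately below, host/auth.sh@101 advances to ffdhe4096 and restarts the key walk. The shape of the driver loop, read off the @101-@104 markers in this trace (the full contents of dhgroups and keys are not printed here, so the inline comments reflect only what this run shows):

    # host/auth.sh@101-104 as it replays in the log:
    for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048, ffdhe3072, ffdhe4096, ...
        for keyid in "${!keys[@]}"; do         # key slots 0 through 4
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"      # @103: program the target
            connect_authenticate sha384 "$dhgroup" "$keyid"    # @104: attach and verify
        done
    done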
00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: ]] 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.506 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.766 nvme0n1 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: ]] 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:23.766 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.767 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:23.767 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.767 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.026 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.026 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.026 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:24.026 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:24.026 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:24.026 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.027 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.027 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:24.027 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:24.027 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:24.027 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:24.027 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:24.027 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:24.027 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.027 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.027 nvme0n1 00:33:24.027 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.027 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 
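A reading note for the check that follows the jq entry below: the [[ nvme0 == \n\v\m\e\0 ]] form is not corruption. Bash xtrace backslash-escapes each character of a quoted right-hand side of == inside [[ ]] to mark it as a literal comparison rather than a glob pattern. A two-line reproduction (the variable name here is hypothetical):

    set -x
    name=nvme0
    [[ $name == "nvme0" ]]    # traces as: + [[ nvme0 == \n\v\m\e\0 ]]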
00:33:24.027 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.027 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.027 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: ]] 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.286 02:14:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.547 nvme0n1 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:24.547 02:14:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: ]] 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.547 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.807 nvme0n1 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.807 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.067 nvme0n1 00:33:25.067 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.067 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.067 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.067 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.067 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.067 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.067 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.067 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.067 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.067 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: ]] 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.327 02:14:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.587 nvme0n1 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: ]] 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:25.587 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.588 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.170 nvme0n1 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.170 02:14:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: ]] 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:26.170 02:14:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.170 02:14:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.429 nvme0n1 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 
00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: ]] 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:26.429 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.430 02:14:46 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.000 nvme0n1 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:27.000 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:27.001 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:27.001 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:27.001 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:27.001 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:27.001 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.001 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.001 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.001 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:27.001 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:27.001 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:27.001 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:27.001 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.001 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.001 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:27.001 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:27.001 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:27.001 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:27.001 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:27.001 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:27.001 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.001 02:14:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.299 nvme0n1 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: ]] 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.299 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.583 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.583 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:27.583 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:27.583 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:27.583 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:27.583 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.583 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.583 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:27.583 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:27.583 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:27.583 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 
192.168.100.8 ]] 00:33:27.583 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:27.583 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:27.583 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.583 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.871 nvme0n1 00:33:27.871 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.871 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:27.871 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:27.871 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.871 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.871 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: ]] 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:28.140 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:28.141 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:28.141 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:28.141 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:28.141 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:28.141 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:28.141 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:28.141 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:28.141 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:28.141 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.141 02:14:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.709 nvme0n1 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:28.709 02:14:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: ]] 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:28.709 02:14:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.709 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.278 nvme0n1 00:33:29.278 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.278 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:29.278 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.278 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:29.278 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.278 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.278 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.278 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:29.278 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.278 02:14:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe8192 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: ]] 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:29.278 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:29.279 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:29.279 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:29.279 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:29.279 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:29.279 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:29.279 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:29.279 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:29.279 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.279 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.847 nvme0n1 00:33:29.847 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.847 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:29.847 
02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:29.847 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.847 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.847 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.847 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.847 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:29.847 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.847 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.847 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.847 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:29.847 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:29.847 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:29.847 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:29.847 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:29.847 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:29.848 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:29.848 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:29.848 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:29.848 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:29.848 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:29.848 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:29.848 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:29.848 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:29.848 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:29.848 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:29.848 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:29.848 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:29.848 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:29.848 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.848 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.107 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.107 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:33:30.107 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:30.107 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:30.107 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:30.107 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:30.107 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:30.107 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:30.107 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:30.107 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:30.107 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:30.107 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:30.107 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:30.107 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.107 02:14:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.676 nvme0n1 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:30.676 02:14:50 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: ]] 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.676 nvme0n1 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.676 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: ]] 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:30.936 02:14:50 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.936 nvme0n1 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:30.936 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.936 
02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: ]] 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.196 
02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.196 nvme0n1 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.196 02:14:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:31.456 02:14:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: ]] 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:31.456 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.457 nvme0n1 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:31.457 02:14:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 
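The get_main_ns_ip helper traced in this stretch resolves the target address indirectly: it maps the transport to the *name* of an environment variable, then expands that name to obtain the address (192.168.100.8 for rdma here). A minimal reconstruction of the logic from the xtrace entries at nvmf/common.sh@767-@781; the TEST_TRANSPORT variable name and the early-return error paths are assumptions, since the trace only shows the success path for rdma:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP   # RDMA tests connect to the first target IP
            [tcp]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT ]] && return 1                      # first @773 check
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1    # second @773 check
        ip=${ip_candidates[$TEST_TRANSPORT]}                      # @774: ip holds a variable NAME
        [[ -z ${!ip} ]] && return 1                               # @776: indirect expansion
        echo "${!ip}"                                             # @781: the address itself
    }

The echoed address feeds directly into the -a argument of the bdev_nvme_attach_controller call that follows each time.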
00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.457 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.717 nvme0n1 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: ]] 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:31.717 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.717 02:14:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.977 nvme0n1 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: ]] 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.977 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.237 nvme0n1 00:33:32.237 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.237 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:32.237 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:32.237 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.237 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.237 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.237 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:32.237 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:32.237 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.237 02:14:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:32.237 
02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: ]] 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 
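Each connect_authenticate pass drives the same four-step RPC sequence against the SPDK host, with only the digest, DH group, and key index changing between iterations. Condensed from the rpc_cmd calls traced here; the commands and flags are verbatim from the log, and rpc_cmd is SPDK's wrapper around scripts/rpc.py:

    # 1. Restrict the host to the digest/dhgroup combination under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    # 2. Attach, authenticating with keyid 2 in both directions
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # 3. Verify the controller actually came up under the expected name
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # 4. Tear down before the next digest/dhgroup/keyid combination
    rpc_cmd bdev_nvme_detach_controller nvme0

The stray nvme0n1 strings interleaved in the trace are the controller's namespace surfacing after each successful attach.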
00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.237 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.497 nvme0n1 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: ]] 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:32.497 02:14:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.497 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.757 nvme0n1 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:32.757 
02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.757 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.017 nvme0n1 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:33.017 02:14:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: ]] 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.017 02:14:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.276 nvme0n1 00:33:33.276 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.276 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 
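This ffdhe4096 pass is one more cell in the full test matrix: the script loops every digest over every DH group over every key index, reprogramming the kernel nvmet target and re-authenticating each time. The loop shape is visible in the host/auth.sh@100 through @104 markers; a sketch:

    for digest in "${digests[@]}"; do           # @100
        for dhgroup in "${dhgroups[@]}"; do     # @101
            for keyid in "${!keys[@]}"; do      # @102
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103: program the target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104: attach, verify, detach
            done
        done
    done

Note the asymmetric last key: keyid 4 has an empty ckey (its ckey= echo is blank and the [[ -z '' ]] branch is taken), so its attach passes only --dhchap-key key4 and omits --dhchap-ctrlr-key, exercising one-way host-to-target authentication without bidirectional controller authentication.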
00:33:33.276 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:33.276 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.276 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.276 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.276 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.276 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:33.276 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.276 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: ]] 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.539 nvme0n1 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.539 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:33.799 
02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: ]] 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.799 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.059 nvme0n1 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: ]] 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.059 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.319 nvme0n1 00:33:34.319 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.319 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:34.319 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:34.319 02:14:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.319 02:14:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_FIRST_TARGET_IP 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.319 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.579 nvme0n1 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: ]] 00:33:34.579 02:14:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:34.579 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.838 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.097 nvme0n1 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.097 02:14:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: ]] 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
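
The block repeated at nvmf/common.sh@767-781 before every attach is get_main_ns_ip choosing which address to dial. It maps the transport to the name of an environment variable and expands it indirectly, which is why the xtrace shows the literal string NVMF_FIRST_TARGET_IP at @774 before 192.168.100.8 is echoed at @781. A sketch consistent with the trace (the exact guard expressions in the real helper may differ):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        # Values are variable *names*, not addresses: RDMA runs dial
        # the target-side IP, TCP runs the initiator-side IP.
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # @773: bail out on an empty or unknown transport.
        if [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
            return 1
        fi
        ip=${ip_candidates[$TEST_TRANSPORT]}   # @774
        [[ -z ${!ip} ]] && return 1            # @776, indirect expansion
        echo "${!ip}"                          # @781: 192.168.100.8 here
    }

The indirection lets one helper serve both transports without duplicating per-transport configuration in every caller.
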
00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:35.097 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:35.098 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:35.098 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:35.098 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:35.098 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:35.098 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:35.098 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:35.098 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:35.098 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:35.098 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.098 02:14:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.666 nvme0n1 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 
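
The key= and ckey= assignments above hold the secrets in the standard DHHC-1 representation, DHHC-1:<hh>:<base64>:, where the middle field records the hash associated with the secret (00 for a secret used as-is, 01/02/03 for SHA-256/384/512), which is why keyid 2's 01 key decodes shorter than the 03 controller key used for keyid 0. The echo entries at host/auth.sh@48-51 that follow each assignment are nvmet_auth_set_key provisioning the kernel soft target; the redirection targets are not visible in the trace, so the configfs paths below are an assumption based on the Linux nvmet host attributes:

    # Hypothetical reconstruction of nvmet_auth_set_key (@42-51).
    # Assumption: the traced echos write the Linux nvmet configfs
    # attributes for the allowed host NQN; requires root, and the
    # real script's paths/cleanup are not shown in this log.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3 key ckey
        key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac(${digest})" > "$host/dhchap_hash"     # @48
        echo "$dhgroup"        > "$host/dhchap_dhgroup"  # @49
        echo "$key"            > "$host/dhchap_key"      # @50
        # @51: controller key only for bidirectional indices.
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }
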
00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: ]] 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:35.666 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:35.667 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:35.667 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.667 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.667 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.667 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:35.667 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:35.667 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:35.667 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:35.667 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:35.667 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:35.667 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:35.667 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:35.667 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:35.667 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:35.667 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:35.667 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:35.667 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.667 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:35.926 nvme0n1 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: ]] 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:35.926 02:14:55 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:35.926 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:35.927 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:35.927 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:35.927 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:35.927 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:35.927 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:35.927 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:35.927 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:36.186 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:36.186 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.186 02:14:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.445 nvme0n1 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 
4 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.445 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.704 nvme0n1 00:33:36.704 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.704 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:36.704 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwOGViNGJhNWFjNDI3NTc2YzNkNGI3ODU1ZWU0OTWXdWuu: 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: ]] 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNjMjI1NzdmYjc3OWM1N2VhNGEzYmM5NDIyNTRkNWVkMmYyNWIxOTcwOGRmZjhkMmRmNTQwMGI1ZjdmYjlmYUH34BE=: 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:36.964 02:14:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.964 02:14:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.533 nvme0n1 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
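
After every attach the loop verifies that authentication actually produced a controller before moving on: host/auth.sh@64 lists controllers and matches the name (the \n\v\m\e\0 pattern in the trace is just xtrace escaping each character of a literal nvme0 on the right-hand side of [[ == ]]), and @65 detaches so the next dhgroup/key pair starts from a clean slate. In sketch form, under the same rpc_cmd assumption as above:

    # Verify-and-teardown step traced at host/auth.sh@64-65.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]   # rendered as \n\v\m\e\0 by xtrace
    rpc_cmd bdev_nvme_detach_controller nvme0
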
00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: ]] 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:37.533 
02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.533 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.102 nvme0n1 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 
-- # echo ffdhe8192 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: ]] 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.102 02:14:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.670 nvme0n1 00:33:38.670 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.670 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:38.670 02:14:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:38.670 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.670 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.670 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.671 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.671 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:38.671 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.671 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDViODY0MzZmZDE3MWJkOTlmY2M2ZDIxZDQ3ZDY3ZGI0YjYwZTlmY2NmMThhYWNl44H+pw==: 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: ]] 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThjZGFiNTczYTVlNDlhOTY2YmI3NTFjNjg2YWVlYzfcmdtk: 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.930 02:14:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.499 nvme0n1 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe8192 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWNkMGU5YjZhYjllNDM0OGI0YjAzNzBiMjhkYjQwYmNjMjhjZjQyNDg0ZTg0YzU5MDE3ODM4Yjg2MmI1ZmZlZOUXx5E=: 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:39.499 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.499 02:14:59 
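Key 4 is the exception to bidirectional auth: its ckeys entry is empty, so the ${ckeys[keyid]:+...} expansion seen at host/auth.sh@58 produces no --dhchap-ctrlr-key argument and the attach exercises host-only (unidirectional) DH-HMAC-CHAP. A sketch of that expansion, with keyid and the ckeys array assumed to be populated as in the script:

    # When ckeys[keyid] is empty the :+ expansion yields an empty array,
    # so no --dhchap-ctrlr-key is passed and only the host authenticates.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"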
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.068 nvme0n1 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: ]] 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:40.068 02:14:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:40.068 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.069 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.069 request: 00:33:40.069 { 00:33:40.069 "name": "nvme0", 00:33:40.069 "trtype": "rdma", 00:33:40.069 "traddr": "192.168.100.8", 00:33:40.069 "adrfam": "ipv4", 00:33:40.069 "trsvcid": "4420", 00:33:40.069 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:40.069 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:40.069 "prchk_reftag": false, 00:33:40.069 "prchk_guard": false, 00:33:40.069 "hdgst": false, 00:33:40.069 "ddgst": false, 00:33:40.069 "allow_unrecognized_csi": false, 00:33:40.069 "method": "bdev_nvme_attach_controller", 00:33:40.069 "req_id": 1 00:33:40.069 } 00:33:40.069 Got JSON-RPC error response 00:33:40.069 response: 00:33:40.069 { 00:33:40.069 "code": -5, 00:33:40.069 "message": "Input/output error" 00:33:40.069 } 00:33:40.069 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:40.069 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:33:40.069 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:40.069 
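After the positive rounds the target is re-keyed for sha256/ffdhe2048 and the script switches to negative testing: the NOT helper inverts the exit status, so an attach that omits the DHCHAP key (or, in the next case, supplies the wrong one) must fail, and the dump above shows the expected JSON-RPC error {"code": -5, "message": "Input/output error"}. A sketch of the same check without the helper, under the same rpc.py assumption as above:

    # Attaching without any DHCHAP key against an auth-required subsystem
    # must fail; treat success as a test error.
    if rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "FAIL: unauthenticated attach unexpectedly succeeded" >&2
        exit 1
    fi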
02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:40.069 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:40.069 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:40.069 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.069 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:40.069 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.328 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.328 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:40.328 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:40.328 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:40.328 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:40.328 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:40.328 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:40.328 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:40.328 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:40.328 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:40.328 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:40.328 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:40.328 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:40.328 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:40.328 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:33:40.328 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:40.328 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:40.328 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:40.328 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:40.328 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:40.329 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:40.329 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.329 02:14:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:40.329 request: 00:33:40.329 { 00:33:40.329 "name": "nvme0", 00:33:40.329 "trtype": "rdma", 00:33:40.329 "traddr": "192.168.100.8", 00:33:40.329 "adrfam": "ipv4", 00:33:40.329 "trsvcid": "4420", 00:33:40.329 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:40.329 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:40.329 "prchk_reftag": false, 00:33:40.329 "prchk_guard": false, 00:33:40.329 "hdgst": false, 00:33:40.329 "ddgst": false, 00:33:40.329 "dhchap_key": "key2", 00:33:40.329 "allow_unrecognized_csi": false, 00:33:40.329 "method": "bdev_nvme_attach_controller", 00:33:40.329 "req_id": 1 00:33:40.329 } 00:33:40.329 Got JSON-RPC error response 00:33:40.329 response: 00:33:40.329 { 00:33:40.329 "code": -5, 00:33:40.329 "message": "Input/output error" 00:33:40.329 } 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:33:40.329 02:15:00 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.329 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.587 request: 00:33:40.587 { 00:33:40.587 "name": "nvme0", 00:33:40.587 "trtype": "rdma", 00:33:40.587 "traddr": "192.168.100.8", 00:33:40.587 "adrfam": "ipv4", 00:33:40.587 "trsvcid": "4420", 00:33:40.587 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:40.587 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:40.587 "prchk_reftag": false, 00:33:40.587 "prchk_guard": false, 00:33:40.587 "hdgst": false, 00:33:40.587 "ddgst": false, 00:33:40.587 "dhchap_key": "key1", 00:33:40.587 "dhchap_ctrlr_key": "ckey2", 00:33:40.587 "allow_unrecognized_csi": false, 00:33:40.587 "method": "bdev_nvme_attach_controller", 00:33:40.587 "req_id": 1 00:33:40.587 } 00:33:40.587 Got JSON-RPC error response 00:33:40.587 response: 00:33:40.587 { 00:33:40.587 "code": -5, 00:33:40.587 "message": "Input/output error" 00:33:40.587 } 00:33:40.587 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_FIRST_TARGET_IP 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.588 nvme0n1 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: ]] 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.588 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT 
rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.847 request: 00:33:40.847 { 00:33:40.847 "name": "nvme0", 00:33:40.847 "dhchap_key": "key1", 00:33:40.847 "dhchap_ctrlr_key": "ckey2", 00:33:40.847 "method": "bdev_nvme_set_keys", 00:33:40.847 "req_id": 1 00:33:40.847 } 00:33:40.847 Got JSON-RPC error response 00:33:40.847 response: 00:33:40.847 { 00:33:40.847 "code": -13, 00:33:40.847 "message": "Permission denied" 00:33:40.847 } 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:33:40.847 02:15:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:33:41.783 02:15:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:33:41.783 02:15:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:33:41.783 02:15:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.783 02:15:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.783 02:15:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.041 02:15:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:33:42.041 02:15:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@138 -- # sleep 1s 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY3Mjk0OGFkOWMxZWJmM2M4ODE3Mzg4YmFlMzI1ZThkYmU2ZWY4Y2YyNGVlYTQzWCxp/A==: 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: ]] 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDc0ZWM3NDhlNTUxNjRiYmFiMzhmOTc0ZGRlYzc0NzliZmJjZmYzODA3MTQ5NzRhSSwpEg==: 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.976 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.234 nvme0n1 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODg2YTFkMzY0MDQwODA5OWJhOTIxOWZmNWJkZmI4YmZob8xA: 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: ]] 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGRkMGEyZmMzMDZiNzliYzliMjk2ZmY2OWYwZDc1NGIxP5Ym: 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.234 request: 00:33:43.234 { 00:33:43.234 "name": "nvme0", 00:33:43.234 "dhchap_key": "key2", 00:33:43.234 "dhchap_ctrlr_key": "ckey1", 00:33:43.234 "method": "bdev_nvme_set_keys", 00:33:43.234 "req_id": 1 00:33:43.234 } 00:33:43.234 Got JSON-RPC error response 00:33:43.234 
response: 00:33:43.234 { 00:33:43.234 "code": -13, 00:33:43.234 "message": "Permission denied" 00:33:43.234 } 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:33:43.234 02:15:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:33:44.167 02:15:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.168 02:15:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:33:44.168 02:15:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.168 02:15:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.168 02:15:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.435 02:15:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:33:44.435 02:15:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:33:45.372 02:15:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.372 02:15:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:33:45.372 rmmod nvme_rdma 00:33:45.372 rmmod nvme_fabrics 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 3403384 ']' 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 3403384 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 3403384 ']' 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 3403384 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3403384 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3403384' 00:33:45.372 killing process with pid 3403384 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 3403384 00:33:45.372 02:15:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 3403384 00:33:46.751 02:15:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:46.751 02:15:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:33:46.751 02:15:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:46.751 02:15:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:46.751 02:15:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:46.751 02:15:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:46.751 02:15:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:33:46.751 02:15:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:46.751 02:15:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:46.751 02:15:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:46.751 02:15:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:46.751 02:15:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:33:46.751 02:15:06 
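The cleanup here undoes the kernel soft target through configfs before unloading the modules. The same sequence as a standalone sketch, run as root on the target node; note that bash xtrace does not print redirections, so the target of the echo 0 at nvmf/common.sh@712 is an assumption (taken to be the namespace's enable attribute):

    # Tear down the nvmet configfs hierarchy created for the auth test.
    rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    # redirect target assumed; xtrace showed only "echo 0"
    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    modprobe -r nvmet_rdma nvmet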
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_rdma nvmet 00:33:46.751 02:15:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:33:50.043 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:50.043 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:50.043 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:50.043 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:50.043 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:50.043 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:50.043 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:50.043 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:50.043 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:50.043 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:50.043 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:50.043 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:50.043 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:50.043 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:50.043 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:50.043 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:53.339 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:33:53.339 02:15:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.n2S /tmp/spdk.key-null.yvt /tmp/spdk.key-sha256.2EF /tmp/spdk.key-sha384.1Nt /tmp/spdk.key-sha512.C0c /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvme-auth.log 00:33:53.339 02:15:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:33:56.629 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:56.629 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:56.629 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:56.629 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:56.629 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:56.629 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:56.629 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:56.629 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:56.629 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:56.629 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:56.629 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:56.629 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:56.629 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:33:56.629 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:56.629 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:56.629 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:56.629 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:56.629 00:33:56.629 real 0m59.295s 00:33:56.629 user 0m52.161s 00:33:56.629 sys 0m15.936s 00:33:56.629 02:15:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:56.629 02:15:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.629 ************************************ 00:33:56.629 END TEST nvmf_auth_host 00:33:56.629 ************************************ 00:33:56.629 02:15:15 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:33:56.629 02:15:15 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:33:56.629 
02:15:15 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:33:56.629 02:15:15 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:33:56.629 02:15:15 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:33:56.629 02:15:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:56.629 02:15:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:56.629 02:15:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.629 ************************************ 00:33:56.629 START TEST nvmf_bdevperf 00:33:56.629 ************************************ 00:33:56.629 02:15:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:33:56.629 * Looking for test storage... 00:33:56.629 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:56.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.629 --rc genhtml_branch_coverage=1 00:33:56.629 --rc genhtml_function_coverage=1 00:33:56.629 --rc genhtml_legend=1 00:33:56.629 --rc geninfo_all_blocks=1 00:33:56.629 --rc geninfo_unexecuted_blocks=1 00:33:56.629 00:33:56.629 ' 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:56.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.629 --rc genhtml_branch_coverage=1 00:33:56.629 --rc genhtml_function_coverage=1 00:33:56.629 --rc genhtml_legend=1 00:33:56.629 --rc geninfo_all_blocks=1 00:33:56.629 --rc geninfo_unexecuted_blocks=1 00:33:56.629 00:33:56.629 ' 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:56.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.629 --rc genhtml_branch_coverage=1 00:33:56.629 --rc genhtml_function_coverage=1 00:33:56.629 --rc genhtml_legend=1 00:33:56.629 --rc geninfo_all_blocks=1 00:33:56.629 --rc geninfo_unexecuted_blocks=1 00:33:56.629 00:33:56.629 ' 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:56.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.629 --rc genhtml_branch_coverage=1 00:33:56.629 --rc genhtml_function_coverage=1 00:33:56.629 --rc genhtml_legend=1 00:33:56.629 --rc geninfo_all_blocks=1 00:33:56.629 --rc geninfo_unexecuted_blocks=1 00:33:56.629 00:33:56.629 ' 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:56.629 02:15:16 
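The lt/cmp_versions trace above is common.sh checking the installed lcov (1.15) against 2, splitting each version on dots and comparing element by element. A compact equivalent of the same idea, not the harness's exact code:

    # Succeed iff dotted numeric version $1 is strictly older than $2.
    version_lt() {
        local IFS=. i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0   # earliest differing component decides
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2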
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:56.629 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:56.630 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:56.630 02:15:16 
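Two things are worth noting in the trace above. First, PATH keeps growing because paths/export.sh prepends the same toolchain directories every time it is sourced; harmless, just noisy. Second, common.sh line 33 actually errors ('[: : integer expression expected') because it evaluates '[' '' -eq 1 ']' with an empty variable, and test's -eq needs integers on both sides; the run continues only because the failed test reads as false. The usual hardening, with a hypothetical variable name standing in for whatever common.sh tests there:

    # Fails when the variable is empty:  [ "$SOME_FLAG" -eq 1 ]
    # Safe: default the expansion to 0 before the numeric test.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag set"
    fi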
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:33:56.630 02:15:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:03.198 02:15:21 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:34:03.198 Found 0000:18:00.0 (0x8086 - 0x159b) 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:34:03.198 Found 0000:18:00.1 (0x8086 - 0x159b) 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@403 -- # modinfo irdma 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:03.198 02:15:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:34:03.198 Found net devices under 0000:18:00.0: cvl_0_0 00:34:03.198 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:03.198 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:03.198 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:03.198 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:34:03.198 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:03.198 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:03.198 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:34:03.198 Found net devices under 0000:18:00.1: cvl_0_1 00:34:03.198 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:03.198 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:03.198 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:34:03.198 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:03.198 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:34:03.198 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:34:03.198 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # rdma_device_init 00:34:03.198 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # 
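The discovery loop above found both E810 ports (device ID 0x159b) and loaded irdma with roce_ena=1 so their RDMA functions come up in RoCEv2 mode rather than iWARP. Done by hand, the same step is:

    # Put the E810 RDMA function into RoCEv2 mode (module-wide irdma parameter).
    modprobe -r irdma 2>/dev/null || true
    modprobe irdma roce_ena=1
    cat /sys/module/irdma/parameters/roce_ena   # expect 1; same file the trace checks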
modprobe ib_umad 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@528 -- # allocate_nic_ips 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo cvl_0_0 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo cvl_0_1 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # 
ip addr show cvl_0_0 00:34:03.199 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:34:03.199 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:34:03.199 altname enp24s0f0np0 00:34:03.199 altname ens785f0np0 00:34:03.199 inet 192.168.100.8/24 scope global cvl_0_0 00:34:03.199 valid_lft forever preferred_lft forever 00:34:03.199 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:34:03.199 valid_lft forever preferred_lft forever 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:34:03.199 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:34:03.199 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:34:03.199 altname enp24s0f1np1 00:34:03.199 altname ens785f1np1 00:34:03.199 inet 192.168.100.9/24 scope global cvl_0_1 00:34:03.199 valid_lft forever preferred_lft forever 00:34:03.199 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:34:03.199 valid_lft forever preferred_lft forever 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:03.199 02:15:22 
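allocate_nic_ips/get_ip_address recover each interface's address straight from iproute2's one-line output; both ports already carry their test addresses (192.168.100.8/24 and .9/24). A sketch of that parse, plus the head/tail split that turns the resulting RDMA_IP_LIST into first and second target IPs a few lines further on in the trace:

    # First IPv4 address of an interface, without the /prefix (mirrors common.sh@117).
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1 | head -n 1
    }
    RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address cvl_0_0)" "$(get_ip_address cvl_0_1)")
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9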
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo cvl_0_0 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo cvl_0_1 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:34:03.199 192.168.100.9' 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:34:03.199 192.168.100.9' 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # head -n 1 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:34:03.199 192.168.100.9' 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # tail -n +2 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # head -n 1 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe 
nvme-rdma 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=3415885 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 3415885 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3415885 ']' 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:03.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:03.199 02:15:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.199 [2024-10-09 02:15:22.358171] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:34:03.200 [2024-10-09 02:15:22.358283] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:03.200 [2024-10-09 02:15:22.489503] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:03.200 [2024-10-09 02:15:22.681722] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:03.200 [2024-10-09 02:15:22.681784] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:03.200 [2024-10-09 02:15:22.681797] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:03.200 [2024-10-09 02:15:22.681810] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:03.200 [2024-10-09 02:15:22.681820] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
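nvmfappstart above boils down to launching nvmf_tgt on core mask 0xE and waiting for its RPC socket. A hand-run sketch of the same bring-up, using rpc_get_methods as a liveness probe (the harness's waitforlisten is more thorough):

    ./build/bin/nvmf_tgt -m 0xE &          # reactors on cores 1-3, as in the log
    nvmfpid=$!
    # Poll until the app answers on /var/tmp/spdk.sock (rpc.py's default socket).
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done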
00:34:03.200 [2024-10-09 02:15:22.683480] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:34:03.200 [2024-10-09 02:15:22.683546] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:03.200 [2024-10-09 02:15:22.683560] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:34:03.460 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:03.460 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:34:03.460 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:03.460 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:03.460 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.460 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:03.460 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:34:03.460 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.460 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.460 [2024-10-09 02:15:23.229671] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x612000028fc0/0x617000007c40) succeed. 00:34:03.460 [2024-10-09 02:15:23.239010] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029140/0x617000007fc0) succeed. 00:34:03.460 [2024-10-09 02:15:23.239044] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:34:03.460 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.460 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:03.460 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.460 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.720 Malloc0 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.720 [2024-10-09 02:15:23.345819] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:03.720 { 00:34:03.720 "params": { 00:34:03.720 "name": "Nvme$subsystem", 00:34:03.720 "trtype": "$TEST_TRANSPORT", 00:34:03.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:03.720 "adrfam": "ipv4", 00:34:03.720 "trsvcid": "$NVMF_PORT", 00:34:03.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:03.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:03.720 "hdgst": ${hdgst:-false}, 00:34:03.720 "ddgst": ${ddgst:-false} 00:34:03.720 }, 00:34:03.720 "method": "bdev_nvme_attach_controller" 00:34:03.720 } 00:34:03.720 EOF 00:34:03.720 )") 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 
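The whole target side of this test is the handful of RPCs traced above: create the RDMA transport, back it with a 64 MiB malloc bdev, and expose that as namespace 1 of cnode1 on 192.168.100.8:4420. Outside the harness the same bring-up via scripts/rpc.py would be:

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                                 # -a: allow any host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420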
00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:34:03.720 02:15:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:03.720 "params": { 00:34:03.720 "name": "Nvme1", 00:34:03.720 "trtype": "rdma", 00:34:03.720 "traddr": "192.168.100.8", 00:34:03.720 "adrfam": "ipv4", 00:34:03.720 "trsvcid": "4420", 00:34:03.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:03.720 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:03.720 "hdgst": false, 00:34:03.720 "ddgst": false 00:34:03.720 }, 00:34:03.720 "method": "bdev_nvme_attach_controller" 00:34:03.720 }' 00:34:03.720 [2024-10-09 02:15:23.434401] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:34:03.720 [2024-10-09 02:15:23.434496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3416000 ] 00:34:03.979 [2024-10-09 02:15:23.564701] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:03.979 [2024-10-09 02:15:23.761405] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.548 Running I/O for 1 seconds... 00:34:05.486 15332.00 IOPS, 59.89 MiB/s 00:34:05.486 Latency(us) 00:34:05.486 [2024-10-09T00:15:25.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:05.486 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:05.486 Verification LBA range: start 0x0 length 0x4000 00:34:05.486 Nvme1n1 : 1.01 15343.48 59.94 0.00 0.00 8296.60 2735.42 18578.03 00:34:05.486 [2024-10-09T00:15:25.306Z] =================================================================================================================== 00:34:05.486 [2024-10-09T00:15:25.306Z] Total : 15343.48 59.94 0.00 0.00 8296.60 2735.42 18578.03 00:34:06.420 02:15:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3416358 00:34:06.420 02:15:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:34:06.420 02:15:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:34:06.420 02:15:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:34:06.420 02:15:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:34:06.420 02:15:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:34:06.420 02:15:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:06.420 02:15:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:06.420 { 00:34:06.420 "params": { 00:34:06.420 "name": "Nvme$subsystem", 00:34:06.420 "trtype": "$TEST_TRANSPORT", 00:34:06.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:06.420 "adrfam": "ipv4", 00:34:06.420 "trsvcid": "$NVMF_PORT", 00:34:06.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:06.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:06.420 "hdgst": ${hdgst:-false}, 00:34:06.420 "ddgst": ${ddgst:-false} 00:34:06.420 }, 00:34:06.420 "method": "bdev_nvme_attach_controller" 00:34:06.420 } 00:34:06.420 EOF 00:34:06.420 )") 00:34:06.678 02:15:26 
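gen_nvmf_target_json renders those parameters into a bdev_nvme_attach_controller entry that bdevperf reads over a pipe (--json /dev/fd/62). Written to a file instead, a standalone equivalent of the 1-second verify run looks like the following; note the outer subsystems/config wrapper is reconstructed here, since the trace only shows the inner entry:

    cat > /tmp/nvme1.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1"
              }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 1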
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:34:06.678 02:15:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:34:06.678 02:15:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:34:06.678 02:15:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:06.678 "params": { 00:34:06.678 "name": "Nvme1", 00:34:06.678 "trtype": "rdma", 00:34:06.678 "traddr": "192.168.100.8", 00:34:06.678 "adrfam": "ipv4", 00:34:06.678 "trsvcid": "4420", 00:34:06.678 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:06.678 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:06.678 "hdgst": false, 00:34:06.678 "ddgst": false 00:34:06.678 }, 00:34:06.678 "method": "bdev_nvme_attach_controller" 00:34:06.678 }' 00:34:06.678 [2024-10-09 02:15:26.320900] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:34:06.678 [2024-10-09 02:15:26.321004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3416358 ] 00:34:06.678 [2024-10-09 02:15:26.453372] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:06.936 [2024-10-09 02:15:26.660308] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:07.503 Running I/O for 15 seconds... 00:34:09.374 15360.00 IOPS, 60.00 MiB/s [2024-10-09T00:15:29.453Z] 15473.50 IOPS, 60.44 MiB/s [2024-10-09T00:15:29.453Z] 02:15:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3415885 00:34:09.633 02:15:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:34:10.203 [2024-10-09 02:15:29.815570] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:34:10.203 [2024-10-09 02:15:29.815643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.203 [2024-10-09 02:15:29.815661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.203 [2024-10-09 02:15:29.815688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.203 [2024-10-09 02:15:29.815701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.203 [2024-10-09 02:15:29.815716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.203 [2024-10-09 02:15:29.815727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.203 [2024-10-09 02:15:29.815741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.203 [2024-10-09 02:15:29.815753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.203 [2024-10-09 02:15:29.815766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.203 
[2024-10-09 02:15:29.815777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for each in-flight WRITE, lba 4168 through 4480 and beyond in steps of 8, every one completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.816809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.816823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.816836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.816849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.816861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.816874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.816886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.816899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.816910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.816924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.816936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.816949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.816960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.816974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.816985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.816998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.817010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.817023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.817035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.817048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 
nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.817060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.817073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.817085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.817098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.817109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.817122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.817134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.817149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.817161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.817174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.817185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.817198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.817210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.817223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.817234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.817247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.817259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.817272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.817284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.817297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:10.204 [2024-10-09 02:15:29.817309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.817322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.817334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.817347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.817359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.817371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.817383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.817397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.817408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.817421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.817433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.817446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.817458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.817472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.817484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.817497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.204 [2024-10-09 02:15:29.817508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.204 [2024-10-09 02:15:29.817521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.817533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.817551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.817563] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.817576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.817588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.817601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.817613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.817626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.817638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.817652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.817664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.817678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.817693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.817707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.817727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.817740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.817752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.817766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.817777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.817790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.817803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.817817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.817829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.817842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.817853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.817866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.817878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.817891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.817902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.817915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.817927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.817940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.817951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.817964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.817975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.817989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:10.205 [2024-10-09 02:15:29.818086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818336] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.205 [2024-10-09 02:15:29.818589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 
nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.205 [2024-10-09 02:15:29.818600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.206 [2024-10-09 02:15:29.818614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.206 [2024-10-09 02:15:29.818625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.206 [2024-10-09 02:15:29.818638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.206 [2024-10-09 02:15:29.818651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.206 [2024-10-09 02:15:29.818664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.206 [2024-10-09 02:15:29.818676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.206 [2024-10-09 02:15:29.818689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.206 [2024-10-09 02:15:29.818700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.206 [2024-10-09 02:15:29.818714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.206 [2024-10-09 02:15:29.818725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.206 [2024-10-09 02:15:29.818738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.206 [2024-10-09 02:15:29.818751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.206 [2024-10-09 02:15:29.818764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.206 [2024-10-09 02:15:29.818776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.206 [2024-10-09 02:15:29.818790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000074ff000 len:0x1000 key:0xb73a6248 00:34:10.206 [2024-10-09 02:15:29.818802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.206 [2024-10-09 02:15:29.818816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007501000 len:0x1000 key:0xb73a6248 00:34:10.206 [2024-10-09 02:15:29.818828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.206 [2024-10-09 02:15:29.818842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 
nsid:1 lba:4112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007503000 len:0x1000 key:0xb73a6248 00:34:10.206 [2024-10-09 02:15:29.818854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.206 [2024-10-09 02:15:29.819392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.206 [2024-10-09 02:15:29.819417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.206 [2024-10-09 02:15:29.819430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4120 len:8 PRP1 0x0 PRP2 0x0 00:34:10.206 [2024-10-09 02:15:29.819444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.206 [2024-10-09 02:15:29.819628] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001a1e5c00 was disconnected and freed. reset controller. 00:34:10.206 [2024-10-09 02:15:29.819665] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:34:10.206 [2024-10-09 02:15:29.819684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.206 [2024-10-09 02:15:29.819699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.206 [2024-10-09 02:15:29.819713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.206 [2024-10-09 02:15:29.819725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.206 [2024-10-09 02:15:29.819738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.206 [2024-10-09 02:15:29.819750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.206 [2024-10-09 02:15:29.819763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.206 [2024-10-09 02:15:29.819775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.206 [2024-10-09 02:15:29.845912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:10.206 [2024-10-09 02:15:29.845938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.206 [2024-10-09 02:15:29.845951] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
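The records above are the host-side trace of a forced target teardown: every command still queued on qid:1 completes with ABORTED - SQ DELETION, the qpair is disconnected and freed, and bdev_nvme starts a failover/reset cycle. To quantify a run like this after the fact, the counts can be pulled straight out of a saved copy of the console output; the file name build.log below is an assumption, not something the harness writes:

    # Hypothetical post-mortem tallies over a saved copy of this console log.
    LOG=build.log
    grep -c 'ABORTED - SQ DELETION' "$LOG"                 # I/Os aborted by the SQ teardown
    grep -c 'received RDMA_CM_EVENT_REJECTED' "$LOG"       # reconnects rejected while the target was down
    grep -o 'Failed to connect rqpair=0x[0-9a-f]*' "$LOG" | sort | uniq -c   # failure count per qpair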
00:34:10.206 [2024-10-09 02:15:29.848857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.206 [2024-10-09 02:15:29.852251] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:10.206 [2024-10-09 02:15:29.852282] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:10.206 [2024-10-09 02:15:29.852293] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000137ff7c0 00:34:11.400 11093.33 IOPS, 43.33 MiB/s [2024-10-09T00:15:31.220Z] [2024-10-09 02:15:30.855334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:11.400 [2024-10-09 02:15:30.855382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.400 [2024-10-09 02:15:30.855617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.400 [2024-10-09 02:15:30.855637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.400 [2024-10-09 02:15:30.855651] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:34:11.400 [2024-10-09 02:15:30.858617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.400 [2024-10-09 02:15:30.862617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.400 [2024-10-09 02:15:30.866361] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:11.400 [2024-10-09 02:15:30.866389] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:11.400 [2024-10-09 02:15:30.866401] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000137ff7c0 00:34:12.335 8320.00 IOPS, 32.50 MiB/s [2024-10-09T00:15:32.155Z] [2024-10-09 02:15:31.869449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:12.335 [2024-10-09 02:15:31.869488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:12.335 [2024-10-09 02:15:31.869717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:12.335 [2024-10-09 02:15:31.869733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:12.335 [2024-10-09 02:15:31.869746] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:34:12.335 [2024-10-09 02:15:31.872664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
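The free-standing "11093.33 IOPS, 43.33 MiB/s" figures interleaved here are bdevperf's periodic progress output; throughput sags while each reconnect attempt fails with RDMA connect error -74. The summary table further below gives the job shape (workload: verify, depth: 128, IO size: 4096, core mask 0x1, roughly 15 s of runtime). A comparable standalone invocation is sketched here; the binary path, RPC socket name, and the exact wiring inside bdevperf.sh are assumptions:

    # Sketch only: start bdevperf idle (-z), attach the remote namespace, then run.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -m 0x1 -q 128 -o 4096 -w verify -t 15 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme1 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The attached controller surfaces its namespace as bdev Nvme1n1, which matches the job name in the results table below.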
00:34:12.335 [2024-10-09 02:15:31.878545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:12.335 [2024-10-09 02:15:31.882332] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:12.335 [2024-10-09 02:15:31.882408] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:12.335 [2024-10-09 02:15:31.882444] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000137ff7c0 00:34:12.594 6656.00 IOPS, 26.00 MiB/s [2024-10-09T00:15:32.414Z] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3415885 Killed "${NVMF_APP[@]}" "$@" 00:34:12.594 02:15:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:12.594 02:15:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:12.594 02:15:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:12.594 02:15:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:12.594 02:15:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:12.594 02:15:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=3417218 00:34:12.594 02:15:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:12.594 02:15:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 3417218 00:34:12.594 02:15:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3417218 ']' 00:34:12.594 02:15:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:12.594 02:15:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:12.594 02:15:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:12.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:12.594 02:15:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:12.594 02:15:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:12.594 [2024-10-09 02:15:32.341959] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:34:12.594 [2024-10-09 02:15:32.342063] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:12.853 [2024-10-09 02:15:32.478317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:13.111 [2024-10-09 02:15:32.678028] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:13.111 [2024-10-09 02:15:32.678080] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:13.111 [2024-10-09 02:15:32.678093] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:13.111 [2024-10-09 02:15:32.678107] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:13.111 [2024-10-09 02:15:32.678117] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:13.111 [2024-10-09 02:15:32.679727] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:34:13.111 [2024-10-09 02:15:32.679785] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:13.111 [2024-10-09 02:15:32.679794] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:34:13.111 [2024-10-09 02:15:32.885461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:13.111 [2024-10-09 02:15:32.885516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.111 [2024-10-09 02:15:32.885728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.111 [2024-10-09 02:15:32.885744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.111 [2024-10-09 02:15:32.885758] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:34:13.111 [2024-10-09 02:15:32.888822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.111 [2024-10-09 02:15:32.893138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.111 [2024-10-09 02:15:32.896678] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:13.111 [2024-10-09 02:15:32.896712] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:13.111 [2024-10-09 02:15:32.896724] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000137ff7c0 00:34:13.370 5546.67 IOPS, 21.67 MiB/s [2024-10-09T00:15:33.190Z] 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:13.370 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:34:13.370 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:13.370 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:13.370 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:13.629 [2024-10-09 02:15:33.223466] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x612000028fc0/0x617000007c40) succeed. 
00:34:13.629 [2024-10-09 02:15:33.232964] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029140/0x617000007fc0) succeed. 00:34:13.629 [2024-10-09 02:15:33.232999] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:13.629 Malloc0 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:13.629 [2024-10-09 02:15:33.342223] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.629 02:15:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3416358 00:34:14.193 [2024-10-09 02:15:33.899693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:14.193 [2024-10-09 02:15:33.899739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.193 [2024-10-09 02:15:33.899940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.193 [2024-10-09 02:15:33.899956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.193 [2024-10-09 02:15:33.899969] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:34:14.193 [2024-10-09 02:15:33.902995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
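This is the tgt_init path of bdevperf.sh: the old target (pid 3415885) was killed mid-run, a fresh nvmf_tgt was started with -i 0 -e 0xFFFF -m 0xE, and the subsystem is rebuilt over JSON-RPC (rpc_cmd is a thin wrapper around scripts/rpc.py). The same bring-up as a standalone sketch, assuming nvmf_tgt is already listening on the default RPC socket /var/tmp/spdk.sock:

    # Recreate the target configuration shown in the trace above.
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # -u: I/O unit size; the log shows it auto-adjusted to 24576
    $rpc bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Once the listener is up, the still-running bdevperf host (the "wait 3416358" above) reconnects on its next retry, which produces the "Resetting controller successful" record that follows.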
00:34:14.193 [2024-10-09 02:15:33.910484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:14.193 [2024-10-09 02:15:33.956584] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:34:15.463 5083.14 IOPS, 19.86 MiB/s
[2024-10-09T00:15:36.253Z] 6386.25 IOPS, 24.95 MiB/s
[2024-10-09T00:15:37.187Z] 7403.11 IOPS, 28.92 MiB/s
[2024-10-09T00:15:38.120Z] 8216.80 IOPS, 32.10 MiB/s
[2024-10-09T00:15:39.494Z] 8882.73 IOPS, 34.70 MiB/s
[2024-10-09T00:15:40.427Z] 9436.67 IOPS, 36.86 MiB/s
[2024-10-09T00:15:41.362Z] 9905.85 IOPS, 38.69 MiB/s
[2024-10-09T00:15:42.297Z] 10308.21 IOPS, 40.27 MiB/s
[2024-10-09T00:15:42.297Z] 10658.13 IOPS, 41.63 MiB/s
00:34:22.477 Latency(us)
[2024-10-09T00:15:42.297Z] Device Information : runtime(s)  IOPS      MiB/s  Fail/s    TO/s  Average  min     max
00:34:22.477 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:22.477 Verification LBA range: start 0x0 length 0x4000
00:34:22.477 Nvme1n1            :       15.01  10659.36  41.64  12371.05  0.00  5535.73  740.84  612733.11
00:34:22.477 [2024-10-09T00:15:42.297Z] ===================================================================================================================
00:34:22.477 [2024-10-09T00:15:42.297Z] Total              :              10659.36  41.64  12371.05  0.00  5535.73  740.84  612733.11
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 3417218 ']'
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 3417218
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 3417218 ']'
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 3417218
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3417218
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3417218'
killing process with pid 3417218
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 3417218
00:34:23.852 02:15:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 3417218
00:34:25.229 02:15:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:34:25.229 02:15:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]]
00:34:25.229
00:34:25.229 real 0m28.883s
00:34:25.229 user 1m16.624s
00:34:25.229 sys 0m6.538s
00:34:25.229 02:15:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:34:25.229 02:15:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:25.229 ************************************
00:34:25.229 END TEST nvmf_bdevperf
00:34:25.229 ************************************
00:34:25.229 02:15:44 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma
00:34:25.229 02:15:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:34:25.229 02:15:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:34:25.229 02:15:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:34:25.229 ************************************
00:34:25.229 START TEST nvmf_target_disconnect
00:34:25.229 ************************************
00:34:25.229 02:15:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma
00:34:25.229 * Looking for test storage...
00:34:25.229 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:25.488 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:25.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.489 --rc genhtml_branch_coverage=1 00:34:25.489 --rc genhtml_function_coverage=1 00:34:25.489 --rc genhtml_legend=1 00:34:25.489 --rc geninfo_all_blocks=1 00:34:25.489 --rc geninfo_unexecuted_blocks=1 00:34:25.489 00:34:25.489 ' 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:25.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.489 --rc genhtml_branch_coverage=1 00:34:25.489 --rc genhtml_function_coverage=1 00:34:25.489 --rc genhtml_legend=1 00:34:25.489 --rc geninfo_all_blocks=1 00:34:25.489 --rc geninfo_unexecuted_blocks=1 00:34:25.489 00:34:25.489 ' 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:25.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.489 --rc genhtml_branch_coverage=1 00:34:25.489 --rc genhtml_function_coverage=1 00:34:25.489 --rc genhtml_legend=1 00:34:25.489 --rc geninfo_all_blocks=1 00:34:25.489 --rc geninfo_unexecuted_blocks=1 00:34:25.489 00:34:25.489 ' 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:25.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.489 --rc genhtml_branch_coverage=1 00:34:25.489 --rc genhtml_function_coverage=1 00:34:25.489 --rc genhtml_legend=1 00:34:25.489 --rc geninfo_all_blocks=1 00:34:25.489 --rc geninfo_unexecuted_blocks=1 00:34:25.489 00:34:25.489 ' 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:34:25.489 02:15:45 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:25.489 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme 00:34:25.489 02:15:45 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:34:25.489 02:15:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:32.055 02:15:50 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:34:32.055 Found 0000:18:00.0 (0x8086 - 0x159b) 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:32.055 02:15:50 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:34:32.055 Found 0000:18:00.1 (0x8086 - 0x159b) 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@403 -- # modinfo irdma 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:34:32.055 Found net devices under 0000:18:00.0: cvl_0_0 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:34:32.055 Found net devices under 0000:18:00.1: cvl_0_1 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.055 
02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # rdma_device_init 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:34:32.055 02:15:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:34:32.055 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:34:32.055 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:34:32.055 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:34:32.055 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:34:32.055 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:34:32.055 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:34:32.055 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@528 -- # allocate_nic_ips 00:34:32.055 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:32.055 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:34:32.055 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:32.055 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:32.055 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:32.055 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:32.055 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:32.055 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:32.055 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:32.055 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:34:32.055 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:32.055 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:34:32.055 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo cvl_0_0 00:34:32.055 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:34:32.056 02:15:51 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo cvl_0_1 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:34:32.056 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:34:32.056 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:34:32.056 altname enp24s0f0np0 00:34:32.056 altname ens785f0np0 00:34:32.056 inet 192.168.100.8/24 scope global cvl_0_0 00:34:32.056 valid_lft forever preferred_lft forever 00:34:32.056 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:34:32.056 valid_lft forever preferred_lft forever 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:34:32.056 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:34:32.056 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:34:32.056 altname enp24s0f1np1 00:34:32.056 altname ens785f1np1 00:34:32.056 inet 192.168.100.9/24 scope global cvl_0_1 00:34:32.056 valid_lft forever preferred_lft forever 00:34:32.056 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:34:32.056 valid_lft forever preferred_lft forever 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@448 -- # return 0 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo cvl_0_0 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo cvl_0_1 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:34:32.056 192.168.100.9' 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:34:32.056 192.168.100.9' 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # head -n 1 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:34:32.056 192.168.100.9' 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # tail -n +2 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # head -n 1 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:32.056 ************************************ 00:34:32.056 START TEST nvmf_target_disconnect_tc1 00:34:32.056 ************************************ 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect ]] 00:34:32.056 02:15:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:34:32.056 [2024-10-09 02:15:51.484341] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:32.056 [2024-10-09 02:15:51.484558] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:32.056 [2024-10-09 02:15:51.484609] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d6e80 00:34:32.994 [2024-10-09 02:15:52.487589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:32.994 [2024-10-09 02:15:52.487644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
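[Editor's note] tc1 runs the reconnect example through the NOT/valid_exec_arg wrappers traced above: the probe against 192.168.100.8 is expected to fail because no target is listening yet, and the wrapper inverts that expected failure (the es=1 accounting just below) into test success. A minimal sketch of the inversion, assuming this simplified shape rather than the exact autotest_common.sh body:

    # Run a command that must fail; succeed only if it did.
    NOT() {
        local es=0
        "$@" || es=$?
        # es stays 0 only when the command unexpectedly succeeded,
        # which is what should fail the test; this mirrors the
        # '(( !es == 0 ))' check in the trace below
        (( es != 0 ))
    }

    # $rootdir is an assumed shorthand for the absolute workspace path in the trace
    NOT "$rootdir/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'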
00:34:32.994 [2024-10-09 02:15:52.487661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:34:32.994 [2024-10-09 02:15:52.487734] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:32.994 [2024-10-09 02:15:52.487751] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:34:32.994 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:34:32.994 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:32.994 Initializing NVMe Controllers 00:34:32.994 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:34:32.994 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:32.994 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:32.994 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:32.994 00:34:32.994 real 0m1.310s 00:34:32.994 user 0m0.908s 00:34:32.994 sys 0m0.396s 00:34:32.994 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:32.994 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:32.994 ************************************ 00:34:32.994 END TEST nvmf_target_disconnect_tc1 00:34:32.994 ************************************ 00:34:32.994 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:32.994 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:32.994 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:32.994 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:32.994 ************************************ 00:34:32.994 START TEST nvmf_target_disconnect_tc2 00:34:32.994 ************************************ 00:34:32.994 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:34:32.995 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:34:32.995 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:32.995 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:32.995 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:32.995 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:32.995 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3421831 00:34:32.995 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3421831 00:34:32.995 02:15:52 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3421831 ']' 00:34:32.995 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:32.995 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:32.995 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:32.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:32.995 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:32.995 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:32.995 02:15:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:32.995 [2024-10-09 02:15:52.775938] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:34:32.995 [2024-10-09 02:15:52.776056] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:33.254 [2024-10-09 02:15:52.917127] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:33.512 [2024-10-09 02:15:53.111842] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:33.513 [2024-10-09 02:15:53.111896] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:33.513 [2024-10-09 02:15:53.111924] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:33.513 [2024-10-09 02:15:53.111938] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:33.513 [2024-10-09 02:15:53.111948] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
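[Editor's note] nvmfappstart above launches the target with -i 0 -e 0xFFFF -m 0xF0; the 0xF0 core mask pins the four reactors to cores 4 through 7, matching the "Reactor started on core" lines below, and waitforlisten then blocks until the RPC socket answers. A sketch of that start-and-wait step, with the polling loop assumed rather than taken from the SPDK helper:

    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # assumed polling loop: wait until /var/tmp/spdk.sock accepts RPCs
    for ((i = 0; i < 100; i++)); do
        "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done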
00:34:33.513 [2024-10-09 02:15:53.114451] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:34:33.513 [2024-10-09 02:15:53.114555] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:34:33.513 [2024-10-09 02:15:53.114604] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:34:33.513 [2024-10-09 02:15:53.114626] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:34:33.770 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:33.770 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:34:33.770 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:33.770 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:33.770 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:34.028 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:34.028 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:34.028 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.028 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:34.028 Malloc0 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:34.029 [2024-10-09 02:15:53.707017] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x612000029a40/0x617000007c40) succeed. 00:34:34.029 [2024-10-09 02:15:53.717188] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029bc0/0x617000007fc0) succeed. 00:34:34.029 [2024-10-09 02:15:53.717221] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:34.029 [2024-10-09 02:15:53.745850] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3421956 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:34.029 02:15:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:34:36.557 02:15:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
host/target_disconnect.sh@45 -- # kill -9 3421831 00:34:36.557 02:15:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:36.817 [2024-10-09 02:15:56.567578] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:34:36.817 Write completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Read completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Read completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Read completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Read completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Write completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Read completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Write completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Read completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Write completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Read completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Read completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Read completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Write completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Read completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Write completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Read completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Write completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Read completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Write completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Read completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Write completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Read completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Write completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Write completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Write completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Read completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Read completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Read completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Read completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Read completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 Read completed with error (sct=0, sc=8) 00:34:36.817 starting I/O failed 00:34:36.817 [2024-10-09 02:15:56.568782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.817 [2024-10-09 02:15:56.570863] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:36.817 
[2024-10-09 02:15:56.570894] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:36.817 [2024-10-09 02:15:56.570909] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:38.196 [2024-10-09 02:15:57.573855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.196 qpair failed and we were unable to recover it. 00:34:38.196 [2024-10-09 02:15:57.575829] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:38.196 [2024-10-09 02:15:57.575860] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:38.196 [2024-10-09 02:15:57.575875] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:38.196 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3421831 Killed "${NVMF_APP[@]}" "$@" 00:34:38.196 02:15:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:34:38.196 02:15:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:38.196 02:15:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:38.196 02:15:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:38.196 02:15:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:38.196 02:15:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3422487 00:34:38.196 02:15:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3422487 00:34:38.196 02:15:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:38.196 02:15:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3422487 ']' 00:34:38.196 02:15:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:38.196 02:15:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:38.196 02:15:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:38.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:38.196 02:15:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:38.196 02:15:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:38.196 [2024-10-09 02:15:57.871301] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 
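[Editor's note] Pieced together from the target_disconnect.sh line numbers traced here (@40, @42, @44, @45, @47, @48), tc2's choreography is: start the host-side reconnect workload in the background, give it two seconds of I/O, kill -9 the target underneath it, then bring up a fresh target while the host keeps retrying. Roughly, as a sketch:

    "$rootdir/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"               # hard-kill nvmf_tgt mid-I/O (pid 3421831 here)
    sleep 2
    disconnect_init 192.168.100.8    # restart nvmf_tgt (pid 3422487) and rebuild the subsystem

The "qpair failed and we were unable to recover it" and RDMA_CM_EVENT_REJECTED messages in between are the host retrying against the dead listener until the new target's listener at 192.168.100.8:4420 comes back.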
00:34:38.196 [2024-10-09 02:15:57.871409] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:38.454 [2024-10-09 02:15:58.025583] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:38.454 [2024-10-09 02:15:58.230912] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:38.454 [2024-10-09 02:15:58.230973] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:38.454 [2024-10-09 02:15:58.231003] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:38.454 [2024-10-09 02:15:58.231019] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:38.454 [2024-10-09 02:15:58.231030] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:38.454 [2024-10-09 02:15:58.233558] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5
00:34:38.454 [2024-10-09 02:15:58.233694] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4
00:34:38.454 [2024-10-09 02:15:58.233629] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6
00:34:38.454 [2024-10-09 02:15:58.233716] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7
00:34:39.019 [2024-10-09 02:15:58.578718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:39.019 qpair failed and we were unable to recover it.
00:34:39.019 [2024-10-09 02:15:58.580610] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:34:39.019 [2024-10-09 02:15:58.580638] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:34:39.019 [2024-10-09 02:15:58.580652] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:39.019 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:34:39.019 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:34:39.019 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:34:39.019 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:34:39.019 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:39.019 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:39.019 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:39.019 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:39.019 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
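The four reactors on cores 4-7 follow directly from the -m 0xF0 core mask passed above (0xF0 = 0b11110000, bits 4 through 7 set); a quick way to verify which cores a mask selects:

    # Print the cores selected by an SPDK core mask.
    mask=0xF0
    for core in {0..7}; do
        (( (mask >> core) & 1 )) && echo "core $core is in the mask"
    done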
00:34:39.019 Malloc0
00:34:39.019 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:39.019 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:34:39.019 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:39.019 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:39.019 [2024-10-09 02:15:58.825805] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x612000029a40/0x617000007c40) succeed.
00:34:39.019 [2024-10-09 02:15:58.836007] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029bc0/0x617000007fc0) succeed.
00:34:39.019 [2024-10-09 02:15:58.836052] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576
00:34:39.277 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:39.277 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:39.277 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:39.277 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:39.277 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:39.277 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:39.277 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:39.277 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:39.277 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:39.277 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:34:39.277 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:39.277 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:39.277 [2024-10-09 02:15:58.873304] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:34:39.277 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:39.277 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
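Collected in one place, the target state rebuilt by the rpc_cmd calls above corresponds to the following direct scripts/rpc.py invocations (the default /var/tmp/spdk.sock socket is assumed, matching the rpc_addr shown in the trace):

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420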
00:34:39.277 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:39.277 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:39.277 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:39.277 02:15:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3421956
00:34:39.843 [2024-10-09 02:15:59.583570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:39.843 qpair failed and we were unable to recover it.
00:34:39.843 [2024-10-09 02:15:59.592730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.843 [2024-10-09 02:15:59.592865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.843 [2024-10-09 02:15:59.592899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.843 [2024-10-09 02:15:59.592920] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.843 [2024-10-09 02:15:59.592933] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:39.843 [2024-10-09 02:15:59.600163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:39.843 qpair failed and we were unable to recover it.
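In the failure signature above, sct 1 is the command-specific status code type, and sc 130 is 0x82, which for a Fabrics CONNECT command the NVMe-oF spec defines as Connect Invalid Parameters. That lines up with the target's "Unknown controller ID 0x1" complaint: the restarted target no longer knows the controller ID under which the host is trying to re-add its I/O qpair. For reference:

    # sc is reported in decimal by nvme_fabric.c; convert to the spec's hex value:
    printf 'sc=%d -> 0x%02X\n' 130 130   # prints: sc=130 -> 0x82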
00:34:39.843 (02:15:59.612) through 00:34:41.142 (02:16:00.703): 54 further connect attempts to nqn.2016-06.io.spdk:cnode1 repeated the identical failure signature shown above -- ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1; nvme_fabric.c Connect command failed, rc -5 / completed with error: sct 1, sc 130; nvme_rdma.c Failed to poll NVMe-oF Fabric CONNECT command / Failed to connect rqpair=0x2000003d4900; nvme_qpair.c CQ transport error -6 (No such device or address) on qpair id 1 -- each ending with "qpair failed and we were unable to recover it."; only the timestamps vary, and the attempts continue below.
00:34:41.142 [2024-10-09 02:16:00.715659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.142 [2024-10-09 02:16:00.715740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.142 [2024-10-09 02:16:00.715765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.142 [2024-10-09 02:16:00.715782] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.142 [2024-10-09 02:16:00.715794] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.142 [2024-10-09 02:16:00.723400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.142 qpair failed and we were unable to recover it. 00:34:41.142 [2024-10-09 02:16:00.735684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.142 [2024-10-09 02:16:00.735769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.142 [2024-10-09 02:16:00.735797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.142 [2024-10-09 02:16:00.735811] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.142 [2024-10-09 02:16:00.735825] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.142 [2024-10-09 02:16:00.743425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.142 qpair failed and we were unable to recover it. 00:34:41.142 [2024-10-09 02:16:00.755877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.142 [2024-10-09 02:16:00.755946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.142 [2024-10-09 02:16:00.755971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.142 [2024-10-09 02:16:00.755988] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.142 [2024-10-09 02:16:00.756000] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.142 [2024-10-09 02:16:00.763534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.142 qpair failed and we were unable to recover it. 
00:34:41.142 [2024-10-09 02:16:00.775783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.142 [2024-10-09 02:16:00.775858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.142 [2024-10-09 02:16:00.775889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.142 [2024-10-09 02:16:00.775903] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.142 [2024-10-09 02:16:00.775917] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.142 [2024-10-09 02:16:00.783691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.142 qpair failed and we were unable to recover it. 00:34:41.142 [2024-10-09 02:16:00.796009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.142 [2024-10-09 02:16:00.796082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.142 [2024-10-09 02:16:00.796107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.142 [2024-10-09 02:16:00.796123] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.142 [2024-10-09 02:16:00.796134] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.142 [2024-10-09 02:16:00.803687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.142 qpair failed and we were unable to recover it. 00:34:41.142 [2024-10-09 02:16:00.815975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.142 [2024-10-09 02:16:00.816043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.142 [2024-10-09 02:16:00.816074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.142 [2024-10-09 02:16:00.816089] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.142 [2024-10-09 02:16:00.816105] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.142 [2024-10-09 02:16:00.826582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.142 qpair failed and we were unable to recover it. 
00:34:41.142 [2024-10-09 02:16:00.836052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.142 [2024-10-09 02:16:00.836118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.142 [2024-10-09 02:16:00.836144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.142 [2024-10-09 02:16:00.836158] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.142 [2024-10-09 02:16:00.836169] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.142 [2024-10-09 02:16:00.843788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.142 qpair failed and we were unable to recover it. 00:34:41.142 [2024-10-09 02:16:00.856098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.142 [2024-10-09 02:16:00.856165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.142 [2024-10-09 02:16:00.856192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.142 [2024-10-09 02:16:00.856207] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.142 [2024-10-09 02:16:00.856218] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.142 [2024-10-09 02:16:00.863912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.142 qpair failed and we were unable to recover it. 00:34:41.142 [2024-10-09 02:16:00.876234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.142 [2024-10-09 02:16:00.876307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.142 [2024-10-09 02:16:00.876332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.142 [2024-10-09 02:16:00.876346] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.142 [2024-10-09 02:16:00.876358] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.142 [2024-10-09 02:16:00.883905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.142 qpair failed and we were unable to recover it. 
00:34:41.142 [2024-10-09 02:16:00.896179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.142 [2024-10-09 02:16:00.896249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.142 [2024-10-09 02:16:00.896275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.142 [2024-10-09 02:16:00.896289] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.142 [2024-10-09 02:16:00.896305] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.142 [2024-10-09 02:16:00.903948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.142 qpair failed and we were unable to recover it. 00:34:41.142 [2024-10-09 02:16:00.916260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.142 [2024-10-09 02:16:00.916341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.142 [2024-10-09 02:16:00.916366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.142 [2024-10-09 02:16:00.916382] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.142 [2024-10-09 02:16:00.916393] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.142 [2024-10-09 02:16:00.924083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.142 qpair failed and we were unable to recover it. 00:34:41.142 [2024-10-09 02:16:00.936339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.142 [2024-10-09 02:16:00.936407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.142 [2024-10-09 02:16:00.936433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.142 [2024-10-09 02:16:00.936447] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.142 [2024-10-09 02:16:00.936459] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.142 [2024-10-09 02:16:00.944067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.142 qpair failed and we were unable to recover it. 
00:34:41.142 [2024-10-09 02:16:00.956463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.142 [2024-10-09 02:16:00.956547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.142 [2024-10-09 02:16:00.956573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.142 [2024-10-09 02:16:00.956587] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.142 [2024-10-09 02:16:00.956599] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.402 [2024-10-09 02:16:00.964193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.402 qpair failed and we were unable to recover it. 00:34:41.402 [2024-10-09 02:16:00.976502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.402 [2024-10-09 02:16:00.976575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.402 [2024-10-09 02:16:00.976600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.402 [2024-10-09 02:16:00.976614] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.402 [2024-10-09 02:16:00.976626] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.402 [2024-10-09 02:16:00.986889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.402 qpair failed and we were unable to recover it. 00:34:41.402 [2024-10-09 02:16:00.996579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.402 [2024-10-09 02:16:00.996661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.402 [2024-10-09 02:16:00.996686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.402 [2024-10-09 02:16:00.996700] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.402 [2024-10-09 02:16:00.996712] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.402 [2024-10-09 02:16:01.004279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.402 qpair failed and we were unable to recover it. 
00:34:41.402 [2024-10-09 02:16:01.016558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.402 [2024-10-09 02:16:01.016627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.402 [2024-10-09 02:16:01.016653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.402 [2024-10-09 02:16:01.016667] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.402 [2024-10-09 02:16:01.016679] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.402 [2024-10-09 02:16:01.024329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.402 qpair failed and we were unable to recover it. 00:34:41.402 [2024-10-09 02:16:01.036702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.402 [2024-10-09 02:16:01.036777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.402 [2024-10-09 02:16:01.036802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.402 [2024-10-09 02:16:01.036817] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.402 [2024-10-09 02:16:01.036829] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.402 [2024-10-09 02:16:01.044382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.402 qpair failed and we were unable to recover it. 00:34:41.402 [2024-10-09 02:16:01.056732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.402 [2024-10-09 02:16:01.056798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.402 [2024-10-09 02:16:01.056824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.402 [2024-10-09 02:16:01.056838] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.402 [2024-10-09 02:16:01.056850] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.402 [2024-10-09 02:16:01.064487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.402 qpair failed and we were unable to recover it. 
00:34:41.402 [2024-10-09 02:16:01.076785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.402 [2024-10-09 02:16:01.076857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.402 [2024-10-09 02:16:01.076883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.402 [2024-10-09 02:16:01.076901] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.402 [2024-10-09 02:16:01.076913] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.402 [2024-10-09 02:16:01.084507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.402 qpair failed and we were unable to recover it. 00:34:41.402 [2024-10-09 02:16:01.096745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.402 [2024-10-09 02:16:01.096814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.402 [2024-10-09 02:16:01.096840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.402 [2024-10-09 02:16:01.096854] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.402 [2024-10-09 02:16:01.096866] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.402 [2024-10-09 02:16:01.104640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.402 qpair failed and we were unable to recover it. 00:34:41.402 [2024-10-09 02:16:01.116859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.402 [2024-10-09 02:16:01.116932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.402 [2024-10-09 02:16:01.116958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.402 [2024-10-09 02:16:01.116972] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.402 [2024-10-09 02:16:01.116984] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.402 [2024-10-09 02:16:01.124652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.402 qpair failed and we were unable to recover it. 
00:34:41.402 [2024-10-09 02:16:01.136989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.402 [2024-10-09 02:16:01.137068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.402 [2024-10-09 02:16:01.137094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.402 [2024-10-09 02:16:01.137108] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.402 [2024-10-09 02:16:01.137120] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.402 [2024-10-09 02:16:01.148242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.402 qpair failed and we were unable to recover it. 00:34:41.402 [2024-10-09 02:16:01.157021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.402 [2024-10-09 02:16:01.157095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.402 [2024-10-09 02:16:01.157121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.402 [2024-10-09 02:16:01.157135] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.402 [2024-10-09 02:16:01.157147] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.402 [2024-10-09 02:16:01.164707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.402 qpair failed and we were unable to recover it. 00:34:41.402 [2024-10-09 02:16:01.177028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.402 [2024-10-09 02:16:01.177097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.403 [2024-10-09 02:16:01.177123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.403 [2024-10-09 02:16:01.177137] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.403 [2024-10-09 02:16:01.177149] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.403 [2024-10-09 02:16:01.184726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.403 qpair failed and we were unable to recover it. 
00:34:41.403 [2024-10-09 02:16:01.197085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.403 [2024-10-09 02:16:01.197164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.403 [2024-10-09 02:16:01.197189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.403 [2024-10-09 02:16:01.197203] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.403 [2024-10-09 02:16:01.197215] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.403 [2024-10-09 02:16:01.204824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.403 qpair failed and we were unable to recover it. 00:34:41.403 [2024-10-09 02:16:01.217174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.403 [2024-10-09 02:16:01.217249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.403 [2024-10-09 02:16:01.217275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.403 [2024-10-09 02:16:01.217291] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.403 [2024-10-09 02:16:01.217302] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.662 [2024-10-09 02:16:01.224882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.662 qpair failed and we were unable to recover it. 00:34:41.662 [2024-10-09 02:16:01.237188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.662 [2024-10-09 02:16:01.237267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.662 [2024-10-09 02:16:01.237291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.662 [2024-10-09 02:16:01.237306] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.662 [2024-10-09 02:16:01.237318] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.662 [2024-10-09 02:16:01.244972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.662 qpair failed and we were unable to recover it. 
00:34:41.662 [2024-10-09 02:16:01.257222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.662 [2024-10-09 02:16:01.257291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.662 [2024-10-09 02:16:01.257321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.662 [2024-10-09 02:16:01.257336] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.662 [2024-10-09 02:16:01.257348] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.662 [2024-10-09 02:16:01.264992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.662 qpair failed and we were unable to recover it. 00:34:41.662 [2024-10-09 02:16:01.277377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.662 [2024-10-09 02:16:01.277446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.662 [2024-10-09 02:16:01.277471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.662 [2024-10-09 02:16:01.277485] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.662 [2024-10-09 02:16:01.277496] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.662 [2024-10-09 02:16:01.285049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.662 qpair failed and we were unable to recover it. 00:34:41.662 [2024-10-09 02:16:01.297391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.662 [2024-10-09 02:16:01.297460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.662 [2024-10-09 02:16:01.297486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.662 [2024-10-09 02:16:01.297501] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.662 [2024-10-09 02:16:01.297513] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.662 [2024-10-09 02:16:01.306454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.662 qpair failed and we were unable to recover it. 
00:34:41.662 [2024-10-09 02:16:01.317429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.662 [2024-10-09 02:16:01.317506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.662 [2024-10-09 02:16:01.317531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.662 [2024-10-09 02:16:01.317553] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.662 [2024-10-09 02:16:01.317565] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.662 [2024-10-09 02:16:01.325097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.662 qpair failed and we were unable to recover it. 00:34:41.662 [2024-10-09 02:16:01.337487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.662 [2024-10-09 02:16:01.337564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.662 [2024-10-09 02:16:01.337589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.662 [2024-10-09 02:16:01.337604] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.662 [2024-10-09 02:16:01.337616] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.662 [2024-10-09 02:16:01.345219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.662 qpair failed and we were unable to recover it. 00:34:41.662 [2024-10-09 02:16:01.357529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.662 [2024-10-09 02:16:01.357600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.662 [2024-10-09 02:16:01.357625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.662 [2024-10-09 02:16:01.357640] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.662 [2024-10-09 02:16:01.357651] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.662 [2024-10-09 02:16:01.365274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.662 qpair failed and we were unable to recover it. 
00:34:41.662 [2024-10-09 02:16:01.377650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.662 [2024-10-09 02:16:01.377717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.662 [2024-10-09 02:16:01.377741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.662 [2024-10-09 02:16:01.377756] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.662 [2024-10-09 02:16:01.377767] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.662 [2024-10-09 02:16:01.385299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.662 qpair failed and we were unable to recover it. 00:34:41.662 [2024-10-09 02:16:01.397626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.663 [2024-10-09 02:16:01.397694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.663 [2024-10-09 02:16:01.397719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.663 [2024-10-09 02:16:01.397733] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.663 [2024-10-09 02:16:01.397745] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.663 [2024-10-09 02:16:01.405409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.663 qpair failed and we were unable to recover it. 00:34:41.663 [2024-10-09 02:16:01.417693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.663 [2024-10-09 02:16:01.417760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.663 [2024-10-09 02:16:01.417785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.663 [2024-10-09 02:16:01.417799] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.663 [2024-10-09 02:16:01.417811] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.663 [2024-10-09 02:16:01.425438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.663 qpair failed and we were unable to recover it. 
00:34:41.663 [2024-10-09 02:16:01.437783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.663 [2024-10-09 02:16:01.437862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.663 [2024-10-09 02:16:01.437887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.663 [2024-10-09 02:16:01.437902] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.663 [2024-10-09 02:16:01.437914] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.663 [2024-10-09 02:16:01.445513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.663 qpair failed and we were unable to recover it. 00:34:41.663 [2024-10-09 02:16:01.457874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.663 [2024-10-09 02:16:01.457941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.663 [2024-10-09 02:16:01.457967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.663 [2024-10-09 02:16:01.457981] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.663 [2024-10-09 02:16:01.457993] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.663 [2024-10-09 02:16:01.465566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.663 qpair failed and we were unable to recover it. 00:34:41.663 [2024-10-09 02:16:01.477947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.663 [2024-10-09 02:16:01.478028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.663 [2024-10-09 02:16:01.478054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.663 [2024-10-09 02:16:01.478069] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.663 [2024-10-09 02:16:01.478081] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.922 [2024-10-09 02:16:01.485655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.922 qpair failed and we were unable to recover it. 
00:34:41.922 [2024-10-09 02:16:01.498030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.922 [2024-10-09 02:16:01.498099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.922 [2024-10-09 02:16:01.498124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.922 [2024-10-09 02:16:01.498139] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.922 [2024-10-09 02:16:01.498151] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.922 [2024-10-09 02:16:01.505680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.922 qpair failed and we were unable to recover it. 00:34:41.922 [2024-10-09 02:16:01.518103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.922 [2024-10-09 02:16:01.518180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.922 [2024-10-09 02:16:01.518205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.922 [2024-10-09 02:16:01.518224] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.922 [2024-10-09 02:16:01.518236] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.922 [2024-10-09 02:16:01.525798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.922 qpair failed and we were unable to recover it. 00:34:41.922 [2024-10-09 02:16:01.538002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.922 [2024-10-09 02:16:01.538079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.922 [2024-10-09 02:16:01.538106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.922 [2024-10-09 02:16:01.538120] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.922 [2024-10-09 02:16:01.538132] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.922 [2024-10-09 02:16:01.545770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.922 qpair failed and we were unable to recover it. 
00:34:41.922 [2024-10-09 02:16:01.558125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.922 [2024-10-09 02:16:01.558204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.922 [2024-10-09 02:16:01.558230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.922 [2024-10-09 02:16:01.558244] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.922 [2024-10-09 02:16:01.558256] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.922 [2024-10-09 02:16:01.565884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.922 qpair failed and we were unable to recover it. 00:34:41.922 [2024-10-09 02:16:01.578260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.922 [2024-10-09 02:16:01.578327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.922 [2024-10-09 02:16:01.578352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.922 [2024-10-09 02:16:01.578367] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.922 [2024-10-09 02:16:01.578379] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.923 [2024-10-09 02:16:01.585950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.923 qpair failed and we were unable to recover it. 00:34:41.923 [2024-10-09 02:16:01.598281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.923 [2024-10-09 02:16:01.598354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.923 [2024-10-09 02:16:01.598378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.923 [2024-10-09 02:16:01.598392] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.923 [2024-10-09 02:16:01.598404] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.923 [2024-10-09 02:16:01.605949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.923 qpair failed and we were unable to recover it. 
00:34:41.923 [2024-10-09 02:16:01.618356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.923 [2024-10-09 02:16:01.618423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.923 [2024-10-09 02:16:01.618448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.923 [2024-10-09 02:16:01.618462] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.923 [2024-10-09 02:16:01.618473] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.923 [2024-10-09 02:16:01.626081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.923 qpair failed and we were unable to recover it. 00:34:41.923 [2024-10-09 02:16:01.638397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.923 [2024-10-09 02:16:01.638479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.923 [2024-10-09 02:16:01.638504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.923 [2024-10-09 02:16:01.638518] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.923 [2024-10-09 02:16:01.638530] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.923 [2024-10-09 02:16:01.646098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.923 qpair failed and we were unable to recover it. 00:34:41.923 [2024-10-09 02:16:01.658438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.923 [2024-10-09 02:16:01.658512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.923 [2024-10-09 02:16:01.658542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.923 [2024-10-09 02:16:01.658556] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.923 [2024-10-09 02:16:01.658568] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.923 [2024-10-09 02:16:01.666135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.923 qpair failed and we were unable to recover it. 
00:34:41.923 [2024-10-09 02:16:01.678585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.923 [2024-10-09 02:16:01.678660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.923 [2024-10-09 02:16:01.678685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.923 [2024-10-09 02:16:01.678699] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.923 [2024-10-09 02:16:01.678710] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.923 [2024-10-09 02:16:01.686232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.923 qpair failed and we were unable to recover it. 00:34:41.923 [2024-10-09 02:16:01.698591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.923 [2024-10-09 02:16:01.698665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.923 [2024-10-09 02:16:01.698696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.923 [2024-10-09 02:16:01.698711] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.923 [2024-10-09 02:16:01.698722] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.923 [2024-10-09 02:16:01.706294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.923 qpair failed and we were unable to recover it. 00:34:41.923 [2024-10-09 02:16:01.718652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.923 [2024-10-09 02:16:01.718736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.923 [2024-10-09 02:16:01.718762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.923 [2024-10-09 02:16:01.718776] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.923 [2024-10-09 02:16:01.718788] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:41.923 [2024-10-09 02:16:01.726360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.923 qpair failed and we were unable to recover it. 
00:34:41.923 [2024-10-09 02:16:01.738701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.923 [2024-10-09 02:16:01.738775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.923 [2024-10-09 02:16:01.738801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.923 [2024-10-09 02:16:01.738817] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.923 [2024-10-09 02:16:01.738829] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.183 [2024-10-09 02:16:01.746442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.183 qpair failed and we were unable to recover it.
00:34:42.183 [2024-10-09 02:16:01.758786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.183 [2024-10-09 02:16:01.758863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.183 [2024-10-09 02:16:01.758889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.183 [2024-10-09 02:16:01.758904] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.183 [2024-10-09 02:16:01.758916] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.183 [2024-10-09 02:16:01.766458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.183 qpair failed and we were unable to recover it.
00:34:42.183 [2024-10-09 02:16:01.781159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.183 [2024-10-09 02:16:01.781234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.183 [2024-10-09 02:16:01.781260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.183 [2024-10-09 02:16:01.781275] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.183 [2024-10-09 02:16:01.781287] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.183 [2024-10-09 02:16:01.786567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.183 qpair failed and we were unable to recover it.
00:34:42.183 [2024-10-09 02:16:01.798898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.183 [2024-10-09 02:16:01.798968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.183 [2024-10-09 02:16:01.798993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.183 [2024-10-09 02:16:01.799007] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.183 [2024-10-09 02:16:01.799019] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.183 [2024-10-09 02:16:01.806612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.183 qpair failed and we were unable to recover it.
00:34:42.183 [2024-10-09 02:16:01.819013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.183 [2024-10-09 02:16:01.819084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.183 [2024-10-09 02:16:01.819109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.183 [2024-10-09 02:16:01.819123] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.183 [2024-10-09 02:16:01.819135] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.183 [2024-10-09 02:16:01.826695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.183 qpair failed and we were unable to recover it.
00:34:42.183 [2024-10-09 02:16:01.838999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.183 [2024-10-09 02:16:01.839077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.183 [2024-10-09 02:16:01.839101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.183 [2024-10-09 02:16:01.839116] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.183 [2024-10-09 02:16:01.839127] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.183 [2024-10-09 02:16:01.846775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.183 qpair failed and we were unable to recover it.
00:34:42.183 [2024-10-09 02:16:01.859023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.183 [2024-10-09 02:16:01.859092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.183 [2024-10-09 02:16:01.859117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.183 [2024-10-09 02:16:01.859131] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.183 [2024-10-09 02:16:01.859143] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.183 [2024-10-09 02:16:01.866767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.183 qpair failed and we were unable to recover it.
00:34:42.183 [2024-10-09 02:16:01.879119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.183 [2024-10-09 02:16:01.879202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.183 [2024-10-09 02:16:01.879227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.183 [2024-10-09 02:16:01.879242] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.183 [2024-10-09 02:16:01.879253] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.183 [2024-10-09 02:16:01.886857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.183 qpair failed and we were unable to recover it.
00:34:42.183 [2024-10-09 02:16:01.899201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.183 [2024-10-09 02:16:01.899268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.183 [2024-10-09 02:16:01.899292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.183 [2024-10-09 02:16:01.899307] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.183 [2024-10-09 02:16:01.899319] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.183 [2024-10-09 02:16:01.906814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.183 qpair failed and we were unable to recover it.
00:34:42.183 [2024-10-09 02:16:01.919267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.183 [2024-10-09 02:16:01.919341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.183 [2024-10-09 02:16:01.919366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.183 [2024-10-09 02:16:01.919380] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.183 [2024-10-09 02:16:01.919392] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.183 [2024-10-09 02:16:01.927015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.183 qpair failed and we were unable to recover it.
00:34:42.183 [2024-10-09 02:16:01.939393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.183 [2024-10-09 02:16:01.939464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.183 [2024-10-09 02:16:01.939490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.183 [2024-10-09 02:16:01.939506] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.183 [2024-10-09 02:16:01.939518] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.183 [2024-10-09 02:16:01.947024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.183 qpair failed and we were unable to recover it.
00:34:42.183 [2024-10-09 02:16:01.959355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.183 [2024-10-09 02:16:01.959428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.183 [2024-10-09 02:16:01.959453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.183 [2024-10-09 02:16:01.959468] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.183 [2024-10-09 02:16:01.959484] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.183 [2024-10-09 02:16:01.967037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.183 qpair failed and we were unable to recover it.
00:34:42.183 [2024-10-09 02:16:01.979414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.183 [2024-10-09 02:16:01.979488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.183 [2024-10-09 02:16:01.979512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.183 [2024-10-09 02:16:01.979527] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.183 [2024-10-09 02:16:01.979544] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.183 [2024-10-09 02:16:01.987202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.183 qpair failed and we were unable to recover it.
00:34:42.443 [2024-10-09 02:16:01.999593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.443 [2024-10-09 02:16:01.999664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.443 [2024-10-09 02:16:01.999689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.443 [2024-10-09 02:16:01.999704] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.443 [2024-10-09 02:16:01.999715] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.443 [2024-10-09 02:16:02.007244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.443 qpair failed and we were unable to recover it.
00:34:42.443 [2024-10-09 02:16:02.019579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.443 [2024-10-09 02:16:02.019647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.443 [2024-10-09 02:16:02.019672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.443 [2024-10-09 02:16:02.019686] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.443 [2024-10-09 02:16:02.019698] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.443 [2024-10-09 02:16:02.027322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.443 qpair failed and we were unable to recover it.
00:34:42.443 [2024-10-09 02:16:02.039645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.443 [2024-10-09 02:16:02.039715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.443 [2024-10-09 02:16:02.039741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.443 [2024-10-09 02:16:02.039755] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.443 [2024-10-09 02:16:02.039767] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.443 [2024-10-09 02:16:02.047316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.443 qpair failed and we were unable to recover it.
00:34:42.443 [2024-10-09 02:16:02.059710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.443 [2024-10-09 02:16:02.059779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.443 [2024-10-09 02:16:02.059804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.443 [2024-10-09 02:16:02.059818] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.443 [2024-10-09 02:16:02.059831] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.443 [2024-10-09 02:16:02.067397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.443 qpair failed and we were unable to recover it.
00:34:42.443 [2024-10-09 02:16:02.079800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.443 [2024-10-09 02:16:02.079874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.443 [2024-10-09 02:16:02.079899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.443 [2024-10-09 02:16:02.079913] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.443 [2024-10-09 02:16:02.079925] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.443 [2024-10-09 02:16:02.087411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.443 qpair failed and we were unable to recover it.
00:34:42.443 [2024-10-09 02:16:02.099835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.443 [2024-10-09 02:16:02.099905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.443 [2024-10-09 02:16:02.099931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.443 [2024-10-09 02:16:02.099945] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.443 [2024-10-09 02:16:02.099957] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.443 [2024-10-09 02:16:02.107563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.443 qpair failed and we were unable to recover it.
00:34:42.443 [2024-10-09 02:16:02.119883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.443 [2024-10-09 02:16:02.119954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.443 [2024-10-09 02:16:02.119979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.443 [2024-10-09 02:16:02.119994] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.443 [2024-10-09 02:16:02.120006] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.443 [2024-10-09 02:16:02.127594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.443 qpair failed and we were unable to recover it.
00:34:42.443 [2024-10-09 02:16:02.139961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.443 [2024-10-09 02:16:02.140028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.443 [2024-10-09 02:16:02.140059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.443 [2024-10-09 02:16:02.140074] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.443 [2024-10-09 02:16:02.140086] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.443 [2024-10-09 02:16:02.147598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.443 qpair failed and we were unable to recover it.
00:34:42.443 [2024-10-09 02:16:02.159968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.443 [2024-10-09 02:16:02.160047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.443 [2024-10-09 02:16:02.160072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.443 [2024-10-09 02:16:02.160086] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.443 [2024-10-09 02:16:02.160098] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.443 [2024-10-09 02:16:02.167712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.443 qpair failed and we were unable to recover it.
00:34:42.443 [2024-10-09 02:16:02.179974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.443 [2024-10-09 02:16:02.180038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.443 [2024-10-09 02:16:02.180062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.443 [2024-10-09 02:16:02.180077] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.443 [2024-10-09 02:16:02.180089] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.443 [2024-10-09 02:16:02.187716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.443 qpair failed and we were unable to recover it.
00:34:42.443 [2024-10-09 02:16:02.200157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.443 [2024-10-09 02:16:02.200233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.443 [2024-10-09 02:16:02.200258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.443 [2024-10-09 02:16:02.200272] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.443 [2024-10-09 02:16:02.200284] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.443 [2024-10-09 02:16:02.207744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.443 qpair failed and we were unable to recover it.
00:34:42.444 [2024-10-09 02:16:02.220259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.444 [2024-10-09 02:16:02.220343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.444 [2024-10-09 02:16:02.220368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.444 [2024-10-09 02:16:02.220383] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.444 [2024-10-09 02:16:02.220395] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.444 [2024-10-09 02:16:02.227932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.444 qpair failed and we were unable to recover it.
00:34:42.444 [2024-10-09 02:16:02.240297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.444 [2024-10-09 02:16:02.240371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.444 [2024-10-09 02:16:02.240396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.444 [2024-10-09 02:16:02.240410] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.444 [2024-10-09 02:16:02.240422] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.444 [2024-10-09 02:16:02.247952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.444 qpair failed and we were unable to recover it.
00:34:42.703 [2024-10-09 02:16:02.260373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.703 [2024-10-09 02:16:02.260454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.703 [2024-10-09 02:16:02.260480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.703 [2024-10-09 02:16:02.260494] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.703 [2024-10-09 02:16:02.260506] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.703 [2024-10-09 02:16:02.268026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.703 qpair failed and we were unable to recover it.
00:34:42.703 [2024-10-09 02:16:02.280373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.703 [2024-10-09 02:16:02.280443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.703 [2024-10-09 02:16:02.280468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.703 [2024-10-09 02:16:02.280482] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.703 [2024-10-09 02:16:02.280494] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.703 [2024-10-09 02:16:02.288004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.703 qpair failed and we were unable to recover it.
00:34:42.703 [2024-10-09 02:16:02.300366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.703 [2024-10-09 02:16:02.300449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.703 [2024-10-09 02:16:02.300474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.703 [2024-10-09 02:16:02.300489] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.703 [2024-10-09 02:16:02.300500] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.703 [2024-10-09 02:16:02.308130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.703 qpair failed and we were unable to recover it.
00:34:42.703 [2024-10-09 02:16:02.320507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.703 [2024-10-09 02:16:02.320587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.703 [2024-10-09 02:16:02.320617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.703 [2024-10-09 02:16:02.320631] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.703 [2024-10-09 02:16:02.320643] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.703 [2024-10-09 02:16:02.328147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.703 qpair failed and we were unable to recover it.
00:34:42.703 [2024-10-09 02:16:02.340524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.703 [2024-10-09 02:16:02.340598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.703 [2024-10-09 02:16:02.340624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.703 [2024-10-09 02:16:02.340638] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.704 [2024-10-09 02:16:02.340650] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.704 [2024-10-09 02:16:02.348273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.704 qpair failed and we were unable to recover it.
00:34:42.704 [2024-10-09 02:16:02.360594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.704 [2024-10-09 02:16:02.360669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.704 [2024-10-09 02:16:02.360695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.704 [2024-10-09 02:16:02.360709] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.704 [2024-10-09 02:16:02.360720] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.704 [2024-10-09 02:16:02.368329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.704 qpair failed and we were unable to recover it.
00:34:42.704 [2024-10-09 02:16:02.380648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.704 [2024-10-09 02:16:02.380716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.704 [2024-10-09 02:16:02.380741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.704 [2024-10-09 02:16:02.380755] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.704 [2024-10-09 02:16:02.380768] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.704 [2024-10-09 02:16:02.388328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.704 qpair failed and we were unable to recover it.
00:34:42.704 [2024-10-09 02:16:02.400717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.704 [2024-10-09 02:16:02.400798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.704 [2024-10-09 02:16:02.400823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.704 [2024-10-09 02:16:02.400838] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.704 [2024-10-09 02:16:02.400853] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.704 [2024-10-09 02:16:02.410165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.704 qpair failed and we were unable to recover it.
00:34:42.704 [2024-10-09 02:16:02.420783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.704 [2024-10-09 02:16:02.420855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.704 [2024-10-09 02:16:02.420880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.704 [2024-10-09 02:16:02.420895] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.704 [2024-10-09 02:16:02.420907] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.704 [2024-10-09 02:16:02.428360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.704 qpair failed and we were unable to recover it.
00:34:42.704 [2024-10-09 02:16:02.440788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.704 [2024-10-09 02:16:02.440861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.704 [2024-10-09 02:16:02.440887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.704 [2024-10-09 02:16:02.440901] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.704 [2024-10-09 02:16:02.440912] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.704 [2024-10-09 02:16:02.448519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.704 qpair failed and we were unable to recover it.
00:34:42.704 [2024-10-09 02:16:02.460882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.704 [2024-10-09 02:16:02.460948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.704 [2024-10-09 02:16:02.460973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.704 [2024-10-09 02:16:02.460988] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.704 [2024-10-09 02:16:02.460999] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.704 [2024-10-09 02:16:02.468613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.704 qpair failed and we were unable to recover it.
00:34:42.704 [2024-10-09 02:16:02.480914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.704 [2024-10-09 02:16:02.480996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.704 [2024-10-09 02:16:02.481022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.704 [2024-10-09 02:16:02.481037] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.704 [2024-10-09 02:16:02.481048] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.704 [2024-10-09 02:16:02.488704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.704 qpair failed and we were unable to recover it.
00:34:42.704 [2024-10-09 02:16:02.500986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.704 [2024-10-09 02:16:02.501053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.704 [2024-10-09 02:16:02.501079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.704 [2024-10-09 02:16:02.501093] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.704 [2024-10-09 02:16:02.501105] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.704 [2024-10-09 02:16:02.508799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.704 qpair failed and we were unable to recover it.
00:34:42.963 [2024-10-09 02:16:02.521082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.963 [2024-10-09 02:16:02.521161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.963 [2024-10-09 02:16:02.521185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.963 [2024-10-09 02:16:02.521200] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.963 [2024-10-09 02:16:02.521211] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.963 [2024-10-09 02:16:02.528751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.963 qpair failed and we were unable to recover it.
00:34:42.963 [2024-10-09 02:16:02.541164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.963 [2024-10-09 02:16:02.541233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.963 [2024-10-09 02:16:02.541258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.963 [2024-10-09 02:16:02.541272] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.963 [2024-10-09 02:16:02.541283] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.963 [2024-10-09 02:16:02.548785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.963 qpair failed and we were unable to recover it.
00:34:42.963 [2024-10-09 02:16:02.561172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.963 [2024-10-09 02:16:02.561252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.963 [2024-10-09 02:16:02.561277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.963 [2024-10-09 02:16:02.561292] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.963 [2024-10-09 02:16:02.561303] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.964 [2024-10-09 02:16:02.569176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.964 qpair failed and we were unable to recover it.
00:34:42.964 [2024-10-09 02:16:02.581201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.964 [2024-10-09 02:16:02.581273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.964 [2024-10-09 02:16:02.581300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.964 [2024-10-09 02:16:02.581320] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.964 [2024-10-09 02:16:02.581332] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.964 [2024-10-09 02:16:02.588924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.964 qpair failed and we were unable to recover it.
00:34:42.964 [2024-10-09 02:16:02.601237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.964 [2024-10-09 02:16:02.601308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.964 [2024-10-09 02:16:02.601333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.964 [2024-10-09 02:16:02.601347] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.964 [2024-10-09 02:16:02.601358] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.964 [2024-10-09 02:16:02.608990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.964 qpair failed and we were unable to recover it.
00:34:42.964 [2024-10-09 02:16:02.621320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.964 [2024-10-09 02:16:02.621392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.964 [2024-10-09 02:16:02.621418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.964 [2024-10-09 02:16:02.621432] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.964 [2024-10-09 02:16:02.621444] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.964 [2024-10-09 02:16:02.629045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.964 qpair failed and we were unable to recover it.
00:34:42.964 [2024-10-09 02:16:02.641321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.964 [2024-10-09 02:16:02.641395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.964 [2024-10-09 02:16:02.641420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.964 [2024-10-09 02:16:02.641434] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.964 [2024-10-09 02:16:02.641446] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.964 [2024-10-09 02:16:02.649109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.964 qpair failed and we were unable to recover it.
00:34:42.964 [2024-10-09 02:16:02.661442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.964 [2024-10-09 02:16:02.661508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.964 [2024-10-09 02:16:02.661533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.964 [2024-10-09 02:16:02.661552] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.964 [2024-10-09 02:16:02.661564] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.964 [2024-10-09 02:16:02.669131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.964 qpair failed and we were unable to recover it.
00:34:42.964 [2024-10-09 02:16:02.681465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.964 [2024-10-09 02:16:02.681545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.964 [2024-10-09 02:16:02.681571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.964 [2024-10-09 02:16:02.681585] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.964 [2024-10-09 02:16:02.681596] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.964 [2024-10-09 02:16:02.689200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.964 qpair failed and we were unable to recover it.
00:34:42.964 [2024-10-09 02:16:02.701498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.964 [2024-10-09 02:16:02.701572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.964 [2024-10-09 02:16:02.701598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.964 [2024-10-09 02:16:02.701613] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.964 [2024-10-09 02:16:02.701625] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.964 [2024-10-09 02:16:02.709308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.964 qpair failed and we were unable to recover it.
00:34:42.964 [2024-10-09 02:16:02.721596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.964 [2024-10-09 02:16:02.721673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.964 [2024-10-09 02:16:02.721698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.964 [2024-10-09 02:16:02.721713] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.964 [2024-10-09 02:16:02.721724] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.964 [2024-10-09 02:16:02.729316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.964 qpair failed and we were unable to recover it.
00:34:42.964 [2024-10-09 02:16:02.741657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.964 [2024-10-09 02:16:02.741730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.964 [2024-10-09 02:16:02.741755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.964 [2024-10-09 02:16:02.741770] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.964 [2024-10-09 02:16:02.741782] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.964 [2024-10-09 02:16:02.749373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.964 qpair failed and we were unable to recover it.
00:34:42.964 [2024-10-09 02:16:02.761797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.964 [2024-10-09 02:16:02.761882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.964 [2024-10-09 02:16:02.761912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.964 [2024-10-09 02:16:02.761926] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.964 [2024-10-09 02:16:02.761938] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:42.964 [2024-10-09 02:16:02.769452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:42.964 qpair failed and we were unable to recover it.
00:34:43.223 [2024-10-09 02:16:02.781826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:43.223 [2024-10-09 02:16:02.781898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:43.223 [2024-10-09 02:16:02.781924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:43.223 [2024-10-09 02:16:02.781938] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:43.223 [2024-10-09 02:16:02.781950] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:43.224 [2024-10-09 02:16:02.789569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:43.224 qpair failed and we were unable to recover it.
00:34:43.224 [2024-10-09 02:16:02.801828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:43.224 [2024-10-09 02:16:02.801907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:43.224 [2024-10-09 02:16:02.801932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:43.224 [2024-10-09 02:16:02.801946] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:43.224 [2024-10-09 02:16:02.801958] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:43.224 [2024-10-09 02:16:02.809600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:43.224 qpair failed and we were unable to recover it.
00:34:43.224 [2024-10-09 02:16:02.821924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:43.224 [2024-10-09 02:16:02.821990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:43.224 [2024-10-09 02:16:02.822014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:43.224 [2024-10-09 02:16:02.822029] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:43.224 [2024-10-09 02:16:02.822041] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:43.224 [2024-10-09 02:16:02.829659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:43.224 qpair failed and we were unable to recover it.
00:34:43.224 [2024-10-09 02:16:02.841900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:43.224 [2024-10-09 02:16:02.841977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:43.224 [2024-10-09 02:16:02.842002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:43.224 [2024-10-09 02:16:02.842016] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:43.224 [2024-10-09 02:16:02.842032] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:43.224 [2024-10-09 02:16:02.849722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:43.224 qpair failed and we were unable to recover it.
00:34:43.224 [2024-10-09 02:16:02.862087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:43.224 [2024-10-09 02:16:02.862156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:43.224 [2024-10-09 02:16:02.862181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:43.224 [2024-10-09 02:16:02.862195] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:43.224 [2024-10-09 02:16:02.862207] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900
00:34:43.224 [2024-10-09 02:16:02.869778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:43.224 qpair failed and we were unable to recover it.
00:34:43.224 [2024-10-09 02:16:02.882128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.224 [2024-10-09 02:16:02.882201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.224 [2024-10-09 02:16:02.882225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.224 [2024-10-09 02:16:02.882240] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.224 [2024-10-09 02:16:02.882251] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.224 [2024-10-09 02:16:02.889886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-10-09 02:16:02.902199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.224 [2024-10-09 02:16:02.902269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.224 [2024-10-09 02:16:02.902294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.224 [2024-10-09 02:16:02.902308] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.224 [2024-10-09 02:16:02.902320] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.224 [2024-10-09 02:16:02.909912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-10-09 02:16:02.922203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.224 [2024-10-09 02:16:02.922271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.224 [2024-10-09 02:16:02.922296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.224 [2024-10-09 02:16:02.922311] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.224 [2024-10-09 02:16:02.922322] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.224 [2024-10-09 02:16:02.929975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.224 qpair failed and we were unable to recover it. 
00:34:43.224 [2024-10-09 02:16:02.942211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.224 [2024-10-09 02:16:02.942283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.224 [2024-10-09 02:16:02.942309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.224 [2024-10-09 02:16:02.942322] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.224 [2024-10-09 02:16:02.942334] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.224 [2024-10-09 02:16:02.950006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-10-09 02:16:02.962378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.224 [2024-10-09 02:16:02.962451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.224 [2024-10-09 02:16:02.962476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.224 [2024-10-09 02:16:02.962490] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.224 [2024-10-09 02:16:02.962501] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.224 [2024-10-09 02:16:02.970037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-10-09 02:16:02.982390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.224 [2024-10-09 02:16:02.982458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.224 [2024-10-09 02:16:02.982483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.224 [2024-10-09 02:16:02.982497] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.224 [2024-10-09 02:16:02.982508] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.224 [2024-10-09 02:16:02.990118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.224 qpair failed and we were unable to recover it. 
00:34:43.224 [2024-10-09 02:16:03.002422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.224 [2024-10-09 02:16:03.002490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.224 [2024-10-09 02:16:03.002515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.224 [2024-10-09 02:16:03.002529] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.224 [2024-10-09 02:16:03.002544] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.224 [2024-10-09 02:16:03.010237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.224 [2024-10-09 02:16:03.022483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.224 [2024-10-09 02:16:03.022552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.224 [2024-10-09 02:16:03.022576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.224 [2024-10-09 02:16:03.022595] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.224 [2024-10-09 02:16:03.022606] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.224 [2024-10-09 02:16:03.030213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.224 qpair failed and we were unable to recover it. 00:34:43.484 [2024-10-09 02:16:03.042624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.484 [2024-10-09 02:16:03.042699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.484 [2024-10-09 02:16:03.042724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.484 [2024-10-09 02:16:03.042739] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.484 [2024-10-09 02:16:03.042750] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.484 [2024-10-09 02:16:03.050390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.484 qpair failed and we were unable to recover it. 
00:34:43.484 [2024-10-09 02:16:03.062648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.484 [2024-10-09 02:16:03.062713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.484 [2024-10-09 02:16:03.062740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.484 [2024-10-09 02:16:03.062754] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.484 [2024-10-09 02:16:03.062766] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.484 [2024-10-09 02:16:03.070333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.484 qpair failed and we were unable to recover it. 00:34:43.484 [2024-10-09 02:16:03.082722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.484 [2024-10-09 02:16:03.082796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.484 [2024-10-09 02:16:03.082821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.484 [2024-10-09 02:16:03.082835] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.484 [2024-10-09 02:16:03.082846] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.484 [2024-10-09 02:16:03.090455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.484 qpair failed and we were unable to recover it. 00:34:43.484 [2024-10-09 02:16:03.102781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.484 [2024-10-09 02:16:03.102849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.484 [2024-10-09 02:16:03.102873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.484 [2024-10-09 02:16:03.102887] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.484 [2024-10-09 02:16:03.102899] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.484 [2024-10-09 02:16:03.110497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.484 qpair failed and we were unable to recover it. 
00:34:43.484 [2024-10-09 02:16:03.122892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.484 [2024-10-09 02:16:03.122962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.484 [2024-10-09 02:16:03.122987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.484 [2024-10-09 02:16:03.123001] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.484 [2024-10-09 02:16:03.123013] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.484 [2024-10-09 02:16:03.130544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.484 qpair failed and we were unable to recover it. 00:34:43.484 [2024-10-09 02:16:03.142847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.484 [2024-10-09 02:16:03.142915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.484 [2024-10-09 02:16:03.142940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.484 [2024-10-09 02:16:03.142954] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.484 [2024-10-09 02:16:03.142966] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.484 [2024-10-09 02:16:03.150602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.484 qpair failed and we were unable to recover it. 00:34:43.484 [2024-10-09 02:16:03.162966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.484 [2024-10-09 02:16:03.163041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.484 [2024-10-09 02:16:03.163066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.484 [2024-10-09 02:16:03.163080] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.484 [2024-10-09 02:16:03.163092] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.484 [2024-10-09 02:16:03.170670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.484 qpair failed and we were unable to recover it. 
00:34:43.484 [2024-10-09 02:16:03.183036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.484 [2024-10-09 02:16:03.183101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.484 [2024-10-09 02:16:03.183126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.484 [2024-10-09 02:16:03.183140] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.484 [2024-10-09 02:16:03.183152] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.484 [2024-10-09 02:16:03.190740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.484 qpair failed and we were unable to recover it. 00:34:43.484 [2024-10-09 02:16:03.203060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.484 [2024-10-09 02:16:03.203133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.484 [2024-10-09 02:16:03.203162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.485 [2024-10-09 02:16:03.203177] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.485 [2024-10-09 02:16:03.203188] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.485 [2024-10-09 02:16:03.210773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.485 qpair failed and we were unable to recover it. 00:34:43.485 [2024-10-09 02:16:03.223092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.485 [2024-10-09 02:16:03.223157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.485 [2024-10-09 02:16:03.223183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.485 [2024-10-09 02:16:03.223197] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.485 [2024-10-09 02:16:03.223209] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.485 [2024-10-09 02:16:03.230849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.485 qpair failed and we were unable to recover it. 
00:34:43.485 [2024-10-09 02:16:03.243195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.485 [2024-10-09 02:16:03.243268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.485 [2024-10-09 02:16:03.243294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.485 [2024-10-09 02:16:03.243308] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.485 [2024-10-09 02:16:03.243320] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.485 [2024-10-09 02:16:03.250814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.485 qpair failed and we were unable to recover it. 00:34:43.485 [2024-10-09 02:16:03.263220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.485 [2024-10-09 02:16:03.263291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.485 [2024-10-09 02:16:03.263317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.485 [2024-10-09 02:16:03.263331] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.485 [2024-10-09 02:16:03.263343] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.485 [2024-10-09 02:16:03.270889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.485 qpair failed and we were unable to recover it. 00:34:43.485 [2024-10-09 02:16:03.283399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.485 [2024-10-09 02:16:03.283480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.485 [2024-10-09 02:16:03.283506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.485 [2024-10-09 02:16:03.283520] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.485 [2024-10-09 02:16:03.283532] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.485 [2024-10-09 02:16:03.291016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.485 qpair failed and we were unable to recover it. 
00:34:43.744 [2024-10-09 02:16:03.303381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.744 [2024-10-09 02:16:03.303452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.744 [2024-10-09 02:16:03.303477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.744 [2024-10-09 02:16:03.303491] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.744 [2024-10-09 02:16:03.303503] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.744 [2024-10-09 02:16:03.311000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.744 qpair failed and we were unable to recover it. 00:34:43.744 [2024-10-09 02:16:03.323392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.744 [2024-10-09 02:16:03.323468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.744 [2024-10-09 02:16:03.323493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.744 [2024-10-09 02:16:03.323507] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.744 [2024-10-09 02:16:03.323519] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.744 [2024-10-09 02:16:03.331118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.744 qpair failed and we were unable to recover it. 00:34:43.744 [2024-10-09 02:16:03.343439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.744 [2024-10-09 02:16:03.343507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.744 [2024-10-09 02:16:03.343533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.744 [2024-10-09 02:16:03.343551] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.744 [2024-10-09 02:16:03.343563] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.744 [2024-10-09 02:16:03.351163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.744 qpair failed and we were unable to recover it. 
00:34:43.744 [2024-10-09 02:16:03.363489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.744 [2024-10-09 02:16:03.363567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.744 [2024-10-09 02:16:03.363592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.744 [2024-10-09 02:16:03.363606] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.744 [2024-10-09 02:16:03.363618] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.744 [2024-10-09 02:16:03.371212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.744 qpair failed and we were unable to recover it. 00:34:43.744 [2024-10-09 02:16:03.383571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.744 [2024-10-09 02:16:03.383639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.744 [2024-10-09 02:16:03.383665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.744 [2024-10-09 02:16:03.383679] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.744 [2024-10-09 02:16:03.383691] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.744 [2024-10-09 02:16:03.391218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.745 qpair failed and we were unable to recover it. 00:34:43.745 [2024-10-09 02:16:03.403594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.745 [2024-10-09 02:16:03.403664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.745 [2024-10-09 02:16:03.403688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.745 [2024-10-09 02:16:03.403702] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.745 [2024-10-09 02:16:03.403714] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.745 [2024-10-09 02:16:03.411269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.745 qpair failed and we were unable to recover it. 
00:34:43.745 [2024-10-09 02:16:03.423572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.745 [2024-10-09 02:16:03.423636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.745 [2024-10-09 02:16:03.423661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.745 [2024-10-09 02:16:03.423676] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.745 [2024-10-09 02:16:03.423687] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.745 [2024-10-09 02:16:03.431407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.745 qpair failed and we were unable to recover it. 00:34:43.745 [2024-10-09 02:16:03.443757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.745 [2024-10-09 02:16:03.443833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.745 [2024-10-09 02:16:03.443857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.745 [2024-10-09 02:16:03.443872] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.745 [2024-10-09 02:16:03.443883] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.745 [2024-10-09 02:16:03.451456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.745 qpair failed and we were unable to recover it. 00:34:43.745 [2024-10-09 02:16:03.463773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.745 [2024-10-09 02:16:03.463840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.745 [2024-10-09 02:16:03.463865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.745 [2024-10-09 02:16:03.463884] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.745 [2024-10-09 02:16:03.463896] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.745 [2024-10-09 02:16:03.471453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.745 qpair failed and we were unable to recover it. 
00:34:43.745 [2024-10-09 02:16:03.483787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.745 [2024-10-09 02:16:03.483859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.745 [2024-10-09 02:16:03.483885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.745 [2024-10-09 02:16:03.483901] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.745 [2024-10-09 02:16:03.483913] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.745 [2024-10-09 02:16:03.491530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.745 qpair failed and we were unable to recover it. 00:34:43.745 [2024-10-09 02:16:03.503891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.745 [2024-10-09 02:16:03.503963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.745 [2024-10-09 02:16:03.503989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.745 [2024-10-09 02:16:03.504003] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.745 [2024-10-09 02:16:03.504015] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.745 [2024-10-09 02:16:03.511683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.745 qpair failed and we were unable to recover it. 00:34:43.745 [2024-10-09 02:16:03.524015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.745 [2024-10-09 02:16:03.524087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.745 [2024-10-09 02:16:03.524113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.745 [2024-10-09 02:16:03.524128] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.745 [2024-10-09 02:16:03.524140] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.745 [2024-10-09 02:16:03.531702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.745 qpair failed and we were unable to recover it. 
00:34:43.745 [2024-10-09 02:16:03.544074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.745 [2024-10-09 02:16:03.544142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.745 [2024-10-09 02:16:03.544168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.745 [2024-10-09 02:16:03.544183] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.745 [2024-10-09 02:16:03.544195] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:43.745 [2024-10-09 02:16:03.551742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:43.745 qpair failed and we were unable to recover it. 00:34:44.004 [2024-10-09 02:16:03.564114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.004 [2024-10-09 02:16:03.564192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.004 [2024-10-09 02:16:03.564218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.004 [2024-10-09 02:16:03.564232] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.004 [2024-10-09 02:16:03.564244] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.004 [2024-10-09 02:16:03.571860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.004 qpair failed and we were unable to recover it. 00:34:44.004 [2024-10-09 02:16:03.584157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.004 [2024-10-09 02:16:03.584229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.004 [2024-10-09 02:16:03.584254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.004 [2024-10-09 02:16:03.584268] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.004 [2024-10-09 02:16:03.584279] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.004 [2024-10-09 02:16:03.591849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.004 qpair failed and we were unable to recover it. 
00:34:44.004 [2024-10-09 02:16:03.604290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.004 [2024-10-09 02:16:03.604364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.004 [2024-10-09 02:16:03.604389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.004 [2024-10-09 02:16:03.604403] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.004 [2024-10-09 02:16:03.604415] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.004 [2024-10-09 02:16:03.611925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.004 qpair failed and we were unable to recover it. 00:34:44.004 [2024-10-09 02:16:03.624286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.004 [2024-10-09 02:16:03.624351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.004 [2024-10-09 02:16:03.624376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.004 [2024-10-09 02:16:03.624390] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.004 [2024-10-09 02:16:03.624402] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.004 [2024-10-09 02:16:03.631995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.004 qpair failed and we were unable to recover it. 00:34:44.004 [2024-10-09 02:16:03.644410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.004 [2024-10-09 02:16:03.644478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.004 [2024-10-09 02:16:03.644507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.004 [2024-10-09 02:16:03.644521] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.004 [2024-10-09 02:16:03.644533] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.004 [2024-10-09 02:16:03.652109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.004 qpair failed and we were unable to recover it. 
00:34:44.004 [2024-10-09 02:16:03.664413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.004 [2024-10-09 02:16:03.664480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.004 [2024-10-09 02:16:03.664505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.004 [2024-10-09 02:16:03.664520] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.004 [2024-10-09 02:16:03.664532] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.004 [2024-10-09 02:16:03.672167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.004 qpair failed and we were unable to recover it. 00:34:44.004 [2024-10-09 02:16:03.686987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.004 [2024-10-09 02:16:03.687069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.004 [2024-10-09 02:16:03.687096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.005 [2024-10-09 02:16:03.687111] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.005 [2024-10-09 02:16:03.687122] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.005 [2024-10-09 02:16:03.692271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.005 qpair failed and we were unable to recover it. 00:34:44.005 [2024-10-09 02:16:03.704526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.005 [2024-10-09 02:16:03.704598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.005 [2024-10-09 02:16:03.704624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.005 [2024-10-09 02:16:03.704638] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.005 [2024-10-09 02:16:03.704650] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.005 [2024-10-09 02:16:03.712188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.005 qpair failed and we were unable to recover it. 
00:34:44.005 [2024-10-09 02:16:03.724530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.005 [2024-10-09 02:16:03.724615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.005 [2024-10-09 02:16:03.724640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.005 [2024-10-09 02:16:03.724655] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.005 [2024-10-09 02:16:03.724666] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.005 [2024-10-09 02:16:03.732376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.005 qpair failed and we were unable to recover it. 00:34:44.005 [2024-10-09 02:16:03.744607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.005 [2024-10-09 02:16:03.744679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.005 [2024-10-09 02:16:03.744705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.005 [2024-10-09 02:16:03.744719] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.005 [2024-10-09 02:16:03.744730] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.005 [2024-10-09 02:16:03.752344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.005 qpair failed and we were unable to recover it. 00:34:44.005 [2024-10-09 02:16:03.764741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.005 [2024-10-09 02:16:03.764815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.005 [2024-10-09 02:16:03.764841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.005 [2024-10-09 02:16:03.764856] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.005 [2024-10-09 02:16:03.764867] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.005 [2024-10-09 02:16:03.772437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.005 qpair failed and we were unable to recover it. 
00:34:44.005 [2024-10-09 02:16:03.784779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.005 [2024-10-09 02:16:03.784853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.005 [2024-10-09 02:16:03.784878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.005 [2024-10-09 02:16:03.784892] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.005 [2024-10-09 02:16:03.784904] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.005 [2024-10-09 02:16:03.792462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.005 qpair failed and we were unable to recover it. 00:34:44.005 [2024-10-09 02:16:03.804859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.005 [2024-10-09 02:16:03.804940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.005 [2024-10-09 02:16:03.804965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.005 [2024-10-09 02:16:03.804980] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.005 [2024-10-09 02:16:03.804991] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.005 [2024-10-09 02:16:03.812579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.005 qpair failed and we were unable to recover it. 00:34:44.264 [2024-10-09 02:16:03.824916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.264 [2024-10-09 02:16:03.824992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.264 [2024-10-09 02:16:03.825017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.264 [2024-10-09 02:16:03.825031] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.265 [2024-10-09 02:16:03.825042] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.265 [2024-10-09 02:16:03.832658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.265 qpair failed and we were unable to recover it. 
00:34:44.265 [2024-10-09 02:16:03.846732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.265 [2024-10-09 02:16:03.846808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.265 [2024-10-09 02:16:03.846835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.265 [2024-10-09 02:16:03.846849] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.265 [2024-10-09 02:16:03.846861] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.265 [2024-10-09 02:16:03.852635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.265 qpair failed and we were unable to recover it. 00:34:44.265 [2024-10-09 02:16:03.865036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.265 [2024-10-09 02:16:03.865105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.265 [2024-10-09 02:16:03.865132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.265 [2024-10-09 02:16:03.865147] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.265 [2024-10-09 02:16:03.865159] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.265 [2024-10-09 02:16:03.872729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.265 qpair failed and we were unable to recover it. 00:34:44.265 [2024-10-09 02:16:03.885013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.265 [2024-10-09 02:16:03.885087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.265 [2024-10-09 02:16:03.885112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.265 [2024-10-09 02:16:03.885126] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.265 [2024-10-09 02:16:03.885137] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.265 [2024-10-09 02:16:03.892757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.265 qpair failed and we were unable to recover it. 
00:34:44.265 [2024-10-09 02:16:03.905093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.265 [2024-10-09 02:16:03.905164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.265 [2024-10-09 02:16:03.905189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.265 [2024-10-09 02:16:03.905203] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.265 [2024-10-09 02:16:03.905220] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.265 [2024-10-09 02:16:03.912906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.265 qpair failed and we were unable to recover it. 00:34:44.265 [2024-10-09 02:16:03.925252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.265 [2024-10-09 02:16:03.925322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.265 [2024-10-09 02:16:03.925347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.265 [2024-10-09 02:16:03.925361] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.265 [2024-10-09 02:16:03.925373] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.265 [2024-10-09 02:16:03.932989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.265 qpair failed and we were unable to recover it. 00:34:44.265 [2024-10-09 02:16:03.945180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.265 [2024-10-09 02:16:03.945251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.265 [2024-10-09 02:16:03.945276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.265 [2024-10-09 02:16:03.945290] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.265 [2024-10-09 02:16:03.945302] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.265 [2024-10-09 02:16:03.953014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.265 qpair failed and we were unable to recover it. 
00:34:44.265 [2024-10-09 02:16:03.965346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.265 [2024-10-09 02:16:03.965415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.265 [2024-10-09 02:16:03.965440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.265 [2024-10-09 02:16:03.965454] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.265 [2024-10-09 02:16:03.965466] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.265 [2024-10-09 02:16:03.973009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.265 qpair failed and we were unable to recover it. 00:34:44.265 [2024-10-09 02:16:03.985374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.265 [2024-10-09 02:16:03.985447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.265 [2024-10-09 02:16:03.985472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.265 [2024-10-09 02:16:03.985486] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.265 [2024-10-09 02:16:03.985498] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.265 [2024-10-09 02:16:03.993090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.265 qpair failed and we were unable to recover it. 00:34:44.265 [2024-10-09 02:16:04.008977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.265 [2024-10-09 02:16:04.009058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.265 [2024-10-09 02:16:04.009085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.265 [2024-10-09 02:16:04.009100] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.265 [2024-10-09 02:16:04.009112] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.265 [2024-10-09 02:16:04.013208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.265 qpair failed and we were unable to recover it. 
00:34:44.265 [2024-10-09 02:16:04.025542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.265 [2024-10-09 02:16:04.025612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.265 [2024-10-09 02:16:04.025638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.265 [2024-10-09 02:16:04.025652] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.265 [2024-10-09 02:16:04.025664] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.265 [2024-10-09 02:16:04.033265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.265 qpair failed and we were unable to recover it. 00:34:44.265 [2024-10-09 02:16:04.045632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.265 [2024-10-09 02:16:04.045707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.265 [2024-10-09 02:16:04.045732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.265 [2024-10-09 02:16:04.045746] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.265 [2024-10-09 02:16:04.045757] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.265 [2024-10-09 02:16:04.053324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.265 qpair failed and we were unable to recover it. 00:34:44.265 [2024-10-09 02:16:04.065648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.265 [2024-10-09 02:16:04.065718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.265 [2024-10-09 02:16:04.065745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.265 [2024-10-09 02:16:04.065760] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.265 [2024-10-09 02:16:04.065773] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:34:44.265 [2024-10-09 02:16:04.073368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:44.265 qpair failed and we were unable to recover it. 00:34:44.265 [2024-10-09 02:16:04.073614] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:34:44.265 A controller has encountered a failure and is being reset. 
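Decoding the status pair in these failures: sct 1 is the Command Specific status type, and sc 130 is 0x82, which in the NVMe-oF Fabrics status tables reads as CONNECT Invalid Parameters. That lines up with the target-side "Unknown controller ID 0x1" complaint, since the I/O queue is naming a controller ID the target no longer knows. Once the admin keep-alive also fails ("Submitting Keep Alive failed"), the host gives up on per-qpair recovery and resets the whole controller, which is the recovery path this test exists to exercise. The hex conversion, for reference:

    # sct 1 = Command Specific; 130 decimal = 0x82 = CONNECT Invalid Parameters
    printf 'sc 130 = 0x%x\n' 130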
00:34:44.265 [2024-10-09 02:16:04.073718] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:34:44.265 [2024-10-09 02:16:04.074106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:44.524 Controller properly reset. 00:34:45.090 [2024-10-09 02:16:04.631588] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:34:45.090 Read completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Read completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Read completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Read completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Read completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Write completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Write completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Write completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Read completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Read completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Read completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Write completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Write completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Read completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Read completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Read completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Write completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Write completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Read completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Write completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Write completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Read completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Write completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Read completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Write completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Write completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Read completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Write completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Write completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Write completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Write completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 Write completed with error (sct=0, sc=8) 00:34:45.090 starting I/O failed 00:34:45.090 [2024-10-09 02:16:04.632611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or 
address) on qpair id 4 00:34:45.657 [2024-10-09 02:16:05.208578] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:34:45.657 Write completed with error (sct=0, sc=8) 00:34:45.657 starting I/O failed 00:34:45.657 Read completed with error (sct=0, sc=8) 00:34:45.657 starting I/O failed 00:34:45.657 Write completed with error (sct=0, sc=8) 00:34:45.657 starting I/O failed 00:34:45.657 Write completed with error (sct=0, sc=8) 00:34:45.657 starting I/O failed 00:34:45.657 Write completed with error (sct=0, sc=8) 00:34:45.657 starting I/O failed 00:34:45.657 Write completed with error (sct=0, sc=8) 00:34:45.657 starting I/O failed 00:34:45.657 Write completed with error (sct=0, sc=8) 00:34:45.657 starting I/O failed 00:34:45.657 Write completed with error (sct=0, sc=8) 00:34:45.657 starting I/O failed 00:34:45.657 Read completed with error (sct=0, sc=8) 00:34:45.657 starting I/O failed 00:34:45.657 Write completed with error (sct=0, sc=8) 00:34:45.657 starting I/O failed 00:34:45.657 Read completed with error (sct=0, sc=8) 00:34:45.657 starting I/O failed 00:34:45.657 Read completed with error (sct=0, sc=8) 00:34:45.657 starting I/O failed 00:34:45.657 Write completed with error (sct=0, sc=8) 00:34:45.657 starting I/O failed 00:34:45.657 Read completed with error (sct=0, sc=8) 00:34:45.657 starting I/O failed 00:34:45.657 Write completed with error (sct=0, sc=8) 00:34:45.657 starting I/O failed 00:34:45.657 Write completed with error (sct=0, sc=8) 00:34:45.657 starting I/O failed 00:34:45.658 Read completed with error (sct=0, sc=8) 00:34:45.658 starting I/O failed 00:34:45.658 Write completed with error (sct=0, sc=8) 00:34:45.658 starting I/O failed 00:34:45.658 Read completed with error (sct=0, sc=8) 00:34:45.658 starting I/O failed 00:34:45.658 Write completed with error (sct=0, sc=8) 00:34:45.658 starting I/O failed 00:34:45.658 Write completed with error (sct=0, sc=8) 00:34:45.658 starting I/O failed 00:34:45.658 Write completed with error (sct=0, sc=8) 00:34:45.658 starting I/O failed 00:34:45.658 Read completed with error (sct=0, sc=8) 00:34:45.658 starting I/O failed 00:34:45.658 Write completed with error (sct=0, sc=8) 00:34:45.658 starting I/O failed 00:34:45.658 Write completed with error (sct=0, sc=8) 00:34:45.658 starting I/O failed 00:34:45.658 Read completed with error (sct=0, sc=8) 00:34:45.658 starting I/O failed 00:34:45.658 Read completed with error (sct=0, sc=8) 00:34:45.658 starting I/O failed 00:34:45.658 Read completed with error (sct=0, sc=8) 00:34:45.658 starting I/O failed 00:34:45.658 Write completed with error (sct=0, sc=8) 00:34:45.658 starting I/O failed 00:34:45.658 Read completed with error (sct=0, sc=8) 00:34:45.658 starting I/O failed 00:34:45.658 Read completed with error (sct=0, sc=8) 00:34:45.658 starting I/O failed 00:34:45.658 Read completed with error (sct=0, sc=8) 00:34:45.658 starting I/O failed 00:34:45.658 [2024-10-09 02:16:05.209675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.223 [2024-10-09 02:16:05.784587] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:34:46.223 Read completed with error (sct=0, sc=8) 00:34:46.223 starting I/O failed 00:34:46.223 Write completed with error (sct=0, sc=8) 00:34:46.223 starting 
I/O failed 00:34:46.223 Write completed with error (sct=0, sc=8) 00:34:46.223 starting I/O failed 00:34:46.223 Write completed with error (sct=0, sc=8) 00:34:46.223 starting I/O failed 00:34:46.223 Read completed with error (sct=0, sc=8) 00:34:46.223 starting I/O failed 00:34:46.223 Read completed with error (sct=0, sc=8) 00:34:46.223 starting I/O failed 00:34:46.223 Read completed with error (sct=0, sc=8) 00:34:46.223 starting I/O failed 00:34:46.223 Write completed with error (sct=0, sc=8) 00:34:46.223 starting I/O failed 00:34:46.224 Read completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Write completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Read completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Read completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Write completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Write completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Write completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Read completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Read completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Read completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Write completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Write completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Write completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Write completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Read completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Write completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Read completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Read completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Write completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Write completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Write completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Read completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Read completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 Read completed with error (sct=0, sc=8) 00:34:46.224 starting I/O failed 00:34:46.224 [2024-10-09 02:16:05.785707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.224 Initializing NVMe Controllers 00:34:46.224 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:34:46.224 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:34:46.224 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:46.224 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:46.224 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:46.224 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:46.224 Initialization complete. Launching workers. 
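A note on the 32-line bursts of "completed with error (sct=0, sc=8)": sct 0 is the Generic Command status type and sc 0x08 is Command Aborted due to SQ Deletion, so each burst is a qpair's outstanding I/Os being failed back as the disconnected queue is torn down; the burst size of 32 matches the queue depth the reconnect tool is driven with (visible as -q 32 in the tc3 invocation just below), and each completion error is paired with a "starting I/O failed" line from the tool. A quick per-direction tally, under the same saved-build.log assumption as above:

    # Outstanding I/Os aborted on qpair teardown, split by direction.
    grep -o 'Read completed with error (sct=0, sc=8)' build.log | wc -l
    grep -o 'Write completed with error (sct=0, sc=8)' build.log | wc -l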
00:34:46.224 Starting thread on core 1 00:34:46.224 Starting thread on core 2 00:34:46.224 Starting thread on core 3 00:34:46.224 Starting thread on core 0 00:34:46.224 02:16:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:46.224 00:34:46.224 real 0m13.351s 00:34:46.224 user 0m26.461s 00:34:46.224 sys 0m3.605s 00:34:46.224 02:16:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:46.224 02:16:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:46.224 ************************************ 00:34:46.224 END TEST nvmf_target_disconnect_tc2 00:34:46.224 ************************************ 00:34:46.481 02:16:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:34:46.481 02:16:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:34:46.481 02:16:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:46.482 02:16:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:46.482 02:16:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:46.482 ************************************ 00:34:46.482 START TEST nvmf_target_disconnect_tc3 00:34:46.482 ************************************ 00:34:46.482 02:16:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc3 00:34:46.482 02:16:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=3423570 00:34:46.482 02:16:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:34:46.482 02:16:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:34:48.382 02:16:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 3422487 00:34:48.382 02:16:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:34:49.317 [2024-10-09 02:16:08.919605] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:34:49.317 Write completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Write completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Write completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Read completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Write completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Write completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Write completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Read completed with error (sct=0, sc=8) 
00:34:49.317 starting I/O failed 00:34:49.317 Write completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Read completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Write completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Write completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Write completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Write completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Write completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Write completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Read completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Write completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Read completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Read completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Write completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Read completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Write completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Write completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Read completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Read completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Write completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Read completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Read completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Write completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Write completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 Read completed with error (sct=0, sc=8) 00:34:49.317 starting I/O failed 00:34:49.317 [2024-10-09 02:16:08.920781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:49.884 [2024-10-09 02:16:09.495593] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:34:49.884 Read completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Write completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Write completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Write completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Write completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Write completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Read completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Write completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Read completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Write completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Read completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Read completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Write 
completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Read completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Read completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Write completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Write completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Write completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Write completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Read completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Write completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Write completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Write completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Write completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Write completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Read completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Read completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Read completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Write completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Read completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Read completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 Write completed with error (sct=0, sc=8) 00:34:49.884 starting I/O failed 00:34:49.884 [2024-10-09 02:16:09.496653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.451 [2024-10-09 02:16:10.071587] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:34:50.451 Read completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Write completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Read completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Write completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Write completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Read completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Write completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Read completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Read completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Read completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Write completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Read completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Write completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Read completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Write completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Write completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Read completed with error (sct=0, sc=8) 00:34:50.451 
starting I/O failed 00:34:50.451 Read completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Read completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Write completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Read completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Read completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Read completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Write completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Read completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Write completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Write completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Read completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Write completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Write completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Write completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 Write completed with error (sct=0, sc=8) 00:34:50.451 starting I/O failed 00:34:50.451 [2024-10-09 02:16:10.072911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:50.451 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 3422487 Killed "${NVMF_APP[@]}" "$@" 00:34:50.451 02:16:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:34:50.451 02:16:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:50.451 02:16:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:50.451 02:16:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:50.451 02:16:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:50.451 02:16:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # nvmfpid=3424100 00:34:50.451 02:16:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # waitforlisten 3424100 00:34:50.451 02:16:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:50.451 02:16:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3424100 ']' 00:34:50.451 02:16:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:50.451 02:16:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:50.451 02:16:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
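This is the pivot of tc3: the serving nvmf_tgt (pid 3422487) is killed out from under the connected host ("line 54: 3422487 Killed"), and disconnect_init then starts a replacement target for the failover address 192.168.100.9 with core mask 0xF0, i.e. cores 4 through 7, which is why the reactor start-up lines report cores 4, 5, 6 and 7. The equivalent manual sequence, sketched on the assumption that you are at the SPDK repo root and $old_pid holds the first target's pid ($old_pid is an illustrative name, not one the harness uses):

    # Kill the serving target to force host-side disconnects ...
    kill -9 "$old_pid"
    # ... then bring up a fresh instance on cores 4-7 with all tracepoint groups on.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &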
00:34:50.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:50.451 02:16:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:50.451 02:16:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:50.451 [2024-10-09 02:16:10.235869] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:34:50.451 [2024-10-09 02:16:10.235978] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:50.709 [2024-10-09 02:16:10.390368] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:50.967 [2024-10-09 02:16:10.593579] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:50.967 [2024-10-09 02:16:10.593642] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:50.967 [2024-10-09 02:16:10.593656] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:50.967 [2024-10-09 02:16:10.593672] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:50.967 [2024-10-09 02:16:10.593682] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:50.967 [2024-10-09 02:16:10.596128] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:34:50.967 [2024-10-09 02:16:10.596206] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:34:50.967 [2024-10-09 02:16:10.596269] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:34:50.967 [2024-10-09 02:16:10.596294] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:34:50.967 [2024-10-09 02:16:10.648579] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:34:50.967 Read completed with error (sct=0, sc=8) 00:34:50.967 starting I/O failed 00:34:50.967 Read completed with error (sct=0, sc=8) 00:34:50.967 starting I/O failed 00:34:50.967 Read completed with error (sct=0, sc=8) 00:34:50.967 starting I/O failed 00:34:50.967 Write completed with error (sct=0, sc=8) 00:34:50.967 starting I/O failed 00:34:50.967 Read completed with error (sct=0, sc=8) 00:34:50.967 starting I/O failed 00:34:50.967 Write completed with error (sct=0, sc=8) 00:34:50.967 starting I/O failed 00:34:50.967 Write completed with error (sct=0, sc=8) 00:34:50.967 starting I/O failed 00:34:50.968 Write completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Read completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Read completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Write completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Read completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Read completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Read completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Read completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Write completed with error (sct=0, sc=8) 
00:34:50.968 starting I/O failed 00:34:50.968 Write completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Write completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Read completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Write completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Read completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Read completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Write completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Read completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Write completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Write completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Read completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Write completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Read completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Read completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Read completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 Write completed with error (sct=0, sc=8) 00:34:50.968 starting I/O failed 00:34:50.968 [2024-10-09 02:16:10.649692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.968 [2024-10-09 02:16:10.651636] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:50.968 [2024-10-09 02:16:10.651664] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:50.968 [2024-10-09 02:16:10.651679] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3c40 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # return 0 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:51.556 Malloc0 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:51.556 [2024-10-09 02:16:11.203650] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x612000029a40/0x617000007c40) succeed. 00:34:51.556 [2024-10-09 02:16:11.213823] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x612000029bc0/0x617000007fc0) succeed. 00:34:51.556 [2024-10-09 02:16:11.213867] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:51.556 [2024-10-09 02:16:11.250371] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.556 
02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.556 02:16:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 3423570 00:34:52.123 [2024-10-09 02:16:11.654605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:52.123 qpair failed and we were unable to recover it. 00:34:52.123 [2024-10-09 02:16:11.656547] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:52.123 [2024-10-09 02:16:11.656577] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:52.123 [2024-10-09 02:16:11.656592] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3c40 00:34:53.057 [2024-10-09 02:16:12.659500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:53.057 qpair failed and we were unable to recover it. 00:34:53.057 [2024-10-09 02:16:12.661489] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:53.057 [2024-10-09 02:16:12.661520] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:53.057 [2024-10-09 02:16:12.661535] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3c40 00:34:53.989 [2024-10-09 02:16:13.664385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:53.989 qpair failed and we were unable to recover it. 00:34:53.990 [2024-10-09 02:16:13.666326] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:53.990 [2024-10-09 02:16:13.666354] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:53.990 [2024-10-09 02:16:13.666372] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3c40 00:34:54.950 [2024-10-09 02:16:14.669238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:54.950 qpair failed and we were unable to recover it. 00:34:54.950 [2024-10-09 02:16:14.671175] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:54.950 [2024-10-09 02:16:14.671205] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:54.950 [2024-10-09 02:16:14.671221] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3c40 00:34:55.885 [2024-10-09 02:16:15.674101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.885 qpair failed and we were unable to recover it. 
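The rpc_cmd calls above are the complete target bring-up for the failover listener. Consolidated against a running nvmf_tgt, using SPDK's scripts/rpc.py client (a sketch; the calls mirror the logged ones, nothing is added):

    # 64 MB malloc bdev with 512-byte blocks to back the namespace.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # RDMA transport with 1024 shared receive buffers.
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    # Subsystem cnode1: allow any host (-a), fixed serial number (-s).
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Listeners on the failover address, for the subsystem and for discovery.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420

Note that the host below keeps retrying qpairs toward the original 192.168.100.8 for several cycles; only the keep-alive failure further down finally pushes it to the alternate address.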
00:34:55.885 [2024-10-09 02:16:15.676019] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:55.885 [2024-10-09 02:16:15.676047] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:55.885 [2024-10-09 02:16:15.676063] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3c40 00:34:57.264 [2024-10-09 02:16:16.678966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:57.264 qpair failed and we were unable to recover it. 00:34:57.264 [2024-10-09 02:16:16.680946] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:57.264 [2024-10-09 02:16:16.680976] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:57.264 [2024-10-09 02:16:16.680994] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3c40 00:34:58.200 [2024-10-09 02:16:17.683902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:58.200 qpair failed and we were unable to recover it. 00:34:58.200 [2024-10-09 02:16:17.686183] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:58.200 [2024-10-09 02:16:17.686218] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:58.200 [2024-10-09 02:16:17.686233] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb00 00:34:59.138 [2024-10-09 02:16:18.689172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:59.138 qpair failed and we were unable to recover it. 00:34:59.138 [2024-10-09 02:16:18.691110] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:59.138 [2024-10-09 02:16:18.691140] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:59.138 [2024-10-09 02:16:18.691156] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb00 00:35:00.075 [2024-10-09 02:16:19.694022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:00.075 qpair failed and we were unable to recover it. 00:35:00.075 [2024-10-09 02:16:19.696390] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:00.075 [2024-10-09 02:16:19.696436] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:00.075 [2024-10-09 02:16:19.696464] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:35:01.013 [2024-10-09 02:16:20.699367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.013 qpair failed and we were unable to recover it. 
00:35:01.013 [2024-10-09 02:16:20.701370] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:01.013 [2024-10-09 02:16:20.701401] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:01.013 [2024-10-09 02:16:20.701415] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:35:02.038 [2024-10-09 02:16:21.704213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.038 qpair failed and we were unable to recover it. 00:35:02.038 [2024-10-09 02:16:21.704510] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:35:02.038 A controller has encountered a failure and is being reset. 00:35:02.038 Resorting to new failover address 192.168.100.9 00:35:02.038 [2024-10-09 02:16:21.706658] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:02.038 [2024-10-09 02:16:21.706698] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:02.038 [2024-10-09 02:16:21.706719] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:35:02.976 [2024-10-09 02:16:22.709655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:02.976 qpair failed and we were unable to recover it. 00:35:02.976 [2024-10-09 02:16:22.711581] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:02.976 [2024-10-09 02:16:22.711610] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:02.976 [2024-10-09 02:16:22.711625] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4900 00:35:03.912 [2024-10-09 02:16:23.714515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:03.912 qpair failed and we were unable to recover it. 00:35:03.912 [2024-10-09 02:16:23.714792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.912 [2024-10-09 02:16:23.714957] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:35:04.171 [2024-10-09 02:16:23.757890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:35:04.171 Controller properly reset. 
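Each failed cycle above ends in "RDMA connect error -74"; errno 74 is EBADMSG on Linux, which appears to be how the host RDMA driver reports getting RDMA_CM_EVENT_REJECTED where it expected RDMA_CM_EVENT_ESTABLISHED. The host burns through retries against three different rqpair objects before the keep-alive failure triggers "Resorting to new failover address 192.168.100.9", after which the reset completes cleanly. To pull the retry timeline out of a saved log (same build.log assumption as earlier; one timestamp per entry if the log is saved one entry per line):

    # Timestamp of every rejected RDMA connect attempt during the failover dance.
    grep 'received RDMA_CM_EVENT_REJECTED' build.log | grep -o '\[[0-9-]* [0-9:.]*\]'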
00:35:04.171 Initializing NVMe Controllers 00:35:04.171 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:35:04.171 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:35:04.171 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:35:04.171 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:35:04.171 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:35:04.171 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:35:04.171 Initialization complete. Launching workers. 00:35:04.171 Starting thread on core 1 00:35:04.171 Starting thread on core 2 00:35:04.171 Starting thread on core 3 00:35:04.171 Starting thread on core 0 00:35:04.430 02:16:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:35:04.430 00:35:04.430 real 0m17.880s 00:35:04.430 user 1m4.996s 00:35:04.430 sys 0m5.127s 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:35:04.430 ************************************ 00:35:04.430 END TEST nvmf_target_disconnect_tc3 00:35:04.430 ************************************ 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:35:04.430 rmmod nvme_rdma 00:35:04.430 rmmod nvme_fabrics 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 3424100 ']' 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 3424100 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3424100 ']' 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 3424100 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3424100 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3424100' 00:35:04.430 killing process with pid 3424100 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 3424100 00:35:04.430 02:16:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 3424100 00:35:06.335 02:16:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:06.335 02:16:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:35:06.335 00:35:06.335 real 0m40.738s 00:35:06.335 user 2m32.682s 00:35:06.335 sys 0m14.524s 00:35:06.335 02:16:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:06.335 02:16:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:06.335 ************************************ 00:35:06.335 END TEST nvmf_target_disconnect 00:35:06.335 ************************************ 00:35:06.335 02:16:25 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:06.335 00:35:06.335 real 8m28.143s 00:35:06.335 user 24m37.076s 00:35:06.335 sys 1m55.657s 00:35:06.335 02:16:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:06.335 02:16:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.335 ************************************ 00:35:06.335 END TEST nvmf_host 00:35:06.335 ************************************ 00:35:06.335 02:16:25 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:35:06.335 00:35:06.335 real 27m57.714s 00:35:06.335 user 81m3.781s 00:35:06.335 sys 6m49.668s 00:35:06.335 02:16:25 nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:06.335 02:16:25 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:06.335 ************************************ 00:35:06.335 END TEST nvmf_rdma 00:35:06.335 ************************************ 00:35:06.335 02:16:25 -- spdk/autotest.sh@278 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:35:06.335 02:16:25 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:06.335 02:16:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:06.335 02:16:25 -- common/autotest_common.sh@10 -- # set +x 00:35:06.335 ************************************ 00:35:06.335 START TEST spdkcli_nvmf_rdma 00:35:06.335 ************************************ 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:35:06.335 * Looking for test storage... 
00:35:06.335 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@1681 -- # lcov --version 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:06.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.335 --rc genhtml_branch_coverage=1 00:35:06.335 --rc genhtml_function_coverage=1 00:35:06.335 --rc genhtml_legend=1 00:35:06.335 --rc geninfo_all_blocks=1 00:35:06.335 --rc geninfo_unexecuted_blocks=1 00:35:06.335 00:35:06.335 ' 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:06.335 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:35:06.335 --rc genhtml_branch_coverage=1 00:35:06.335 --rc genhtml_function_coverage=1 00:35:06.335 --rc genhtml_legend=1 00:35:06.335 --rc geninfo_all_blocks=1 00:35:06.335 --rc geninfo_unexecuted_blocks=1 00:35:06.335 00:35:06.335 ' 00:35:06.335 02:16:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:06.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.335 --rc genhtml_branch_coverage=1 00:35:06.335 --rc genhtml_function_coverage=1 00:35:06.336 --rc genhtml_legend=1 00:35:06.336 --rc geninfo_all_blocks=1 00:35:06.336 --rc geninfo_unexecuted_blocks=1 00:35:06.336 00:35:06.336 ' 00:35:06.336 02:16:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:06.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.336 --rc genhtml_branch_coverage=1 00:35:06.336 --rc genhtml_function_coverage=1 00:35:06.336 --rc genhtml_legend=1 00:35:06.336 --rc geninfo_all_blocks=1 00:35:06.336 --rc geninfo_unexecuted_blocks=1 00:35:06.336 00:35:06.336 ' 00:35:06.336 02:16:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/common.sh 00:35:06.336 02:16:25 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:06.336 02:16:25 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/clear_config.py 00:35:06.336 02:16:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:35:06.336 02:16:25 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:35:06.336 02:16:25 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:06.336 02:16:25 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:06.336 02:16:25 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:06.336 02:16:25 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:06.336 02:16:25 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:06.336 02:16:25 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:06.336 02:16:25 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:06.336 02:16:25 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:06.336 02:16:25 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:06.336 02:16:25 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e71deb-ee4e-e711-906e-0012795d9712 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=80e71deb-ee4e-e711-906e-0012795d9712 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 
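
The lcov probe traced just above ("lt 1.15 2" via cmp_versions) decides whether the installed lcov predates 2.x and therefore needs the explicit branch/function coverage flags exported as LCOV_OPTS. The comparison splits each dotted version string on ".", "-", and ":" and compares components numerically, left to right, padding the shorter list with zeros. A minimal standalone sketch of the same idea (version_lt is an illustrative name, not the exact helper from scripts/common.sh):

    #!/usr/bin/env bash
    # Compare two dotted version strings component by component.
    # Returns 0 (true) if $1 < $2, 1 otherwise.
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( 10#$x < 10#$y )) && return 0   # first differing component decides
            (( 10#$x > 10#$y )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov predates 2.x; enabling branch/function flags"
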
00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:06.336 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- 
spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3426154 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 3426154 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@831 -- # '[' -z 3426154 ']' 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:06.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:06.336 02:16:26 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:06.336 [2024-10-09 02:16:26.104260] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.03.0 initialization... 00:35:06.336 [2024-10-09 02:16:26.104363] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3426154 ] 00:35:06.594 [2024-10-09 02:16:26.234096] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:06.890 [2024-10-09 02:16:26.441423] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:06.890 [2024-10-09 02:16:26.441429] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:35:07.149 02:16:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:07.149 02:16:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # return 0 00:35:07.149 02:16:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:07.149 02:16:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:07.149 02:16:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:07.149 02:16:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:07.149 02:16:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:35:07.149 02:16:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:35:07.149 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:35:07.149 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:07.149 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:07.149 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:07.149 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:07.149 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.149 02:16:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:07.149 02:16:26 spdkcli_nvmf_rdma -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:07.149 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:07.149 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:07.149 02:16:26 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:35:07.150 02:16:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x8086 - 0x159b)' 00:35:13.717 Found 0000:18:00.0 (0x8086 - 0x159b) 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x8086 - 0x159b)' 00:35:13.717 Found 0000:18:00.1 (0x8086 - 0x159b) 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:35:13.717 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:13.718 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:13.718 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:35:13.718 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:35:13.718 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # (( 1 != 1 )) 00:35:13.718 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@403 -- # modinfo irdma 00:35:13.718 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@403 -- # modprobe irdma roce_ena=1 00:35:13.718 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:13.718 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:13.718 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:35:13.718 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:13.718 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:13.718 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: cvl_0_0' 00:35:13.718 Found net devices under 0000:18:00.0: cvl_0_0 00:35:13.718 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:13.718 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:13.718 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:13.718 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:35:13.718 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:13.718 02:16:33 spdkcli_nvmf_rdma -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:13.718 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: cvl_0_1' 00:35:13.718 Found net devices under 0000:18:00.1: cvl_0_1 00:35:13.718 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # is_hw=yes 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # rdma_device_init 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@528 -- # allocate_nic_ips 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:35:13.976 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo cvl_0_0 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo cvl_0_1 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- 
nvmf/common.sh@109 -- # continue 2 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:35:13.977 28: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:35:13.977 link/ether b4:96:91:dd:40:26 brd ff:ff:ff:ff:ff:ff 00:35:13.977 altname enp24s0f0np0 00:35:13.977 altname ens785f0np0 00:35:13.977 inet 192.168.100.8/24 scope global cvl_0_0 00:35:13.977 valid_lft forever preferred_lft forever 00:35:13.977 inet6 fe80::b696:91ff:fedd:4026/64 scope link proto kernel_ll 00:35:13.977 valid_lft forever preferred_lft forever 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:35:13.977 29: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:35:13.977 link/ether b4:96:91:dd:40:27 brd ff:ff:ff:ff:ff:ff 00:35:13.977 altname enp24s0f1np1 00:35:13.977 altname ens785f1np1 00:35:13.977 inet 192.168.100.9/24 scope global cvl_0_1 00:35:13.977 valid_lft forever preferred_lft forever 00:35:13.977 inet6 fe80::b696:91ff:fedd:4027/64 scope link proto kernel_ll 00:35:13.977 valid_lft forever preferred_lft forever 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # return 0 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:35:13.977 
02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\1 ]] 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo cvl_0_0 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo cvl_0_1 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:35:13.977 192.168.100.9' 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:35:13.977 192.168.100.9' 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # head -n 1 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:35:13.977 192.168.100.9' 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # tail -n +2 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # head -n 1 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 
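
The target-IP selection traced above reduces to one pipeline per RDMA interface plus a head/tail split over the collected list: field 4 of "ip -o -4 addr show" is ADDR/PREFIX, so stripping everything from the slash yields the bare address. A minimal sketch of the same extraction, using the cvl_0_* interface names seen in this run:

    #!/usr/bin/env bash
    # Mirror of get_ip_address from nvmf/common.sh as traced above.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # Interface names below are the RDMA devices detected in this run.
    rdma_ips=$(for ifc in cvl_0_0 cvl_0_1; do get_ip_address "$ifc"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$rdma_ips" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$rdma_ips" | tail -n +2 | head -n 1)
    echo "targets: $NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"
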
00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:13.977 02:16:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:13.977 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:13.977 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:13.977 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:13.977 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:13.977 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:13.977 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:13.977 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:13.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:13.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:13.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:35:13.977 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:13.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:13.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:35:13.977 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:13.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:13.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:35:13.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:35:13.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:13.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:13.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:13.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:13.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:35:13.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:35:13.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:13.977 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:13.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:13.977 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:13.977 ' 00:35:17.264 [2024-10-09 02:16:36.436494] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f0(0x61200002b9c0/0x617000007c40) succeed. 00:35:17.264 [2024-10-09 02:16:36.447249] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device rocep24s0f1(0x61200002bb40/0x617000007fc0) succeed. 00:35:17.264 [2024-10-09 02:16:36.447286] rdma.c:2804:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:35:17.264 [2024-10-09 02:16:36.449724] iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 1024/1535 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:35:17.264 [2024-10-09 02:16:36.449765] iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:35:17.264 [2024-10-09 02:16:36.451121] transport.c: 636:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:35:17.264 [2024-10-09 02:16:36.453192] iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 1024/1535 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:35:17.264 [2024-10-09 02:16:36.453221] iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:35:17.264 [2024-10-09 02:16:36.454580] transport.c: 636:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 
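
The quoted block above is the full command script handed to spdkcli_job.py; each entry reappears below as an "Executing command" line once the job replays it against the running nvmf_tgt. Outside the harness the same tree can be built one command at a time with scripts/spdkcli.py, which accepts a one-shot command on its argument list, as the "ll /nvmf" invocation later in this log suggests. A minimal sketch of a slice of that sequence, assuming nvmf_tgt is already listening on the default /var/tmp/spdk.sock:

    #!/usr/bin/env bash
    cd /path/to/spdk   # repo root; adjust for your checkout

    ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc3
    ./scripts/spdkcli.py nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    ./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4
    ./scripts/spdkcli.py ll /nvmf   # inspect the resulting configuration tree
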
00:35:18.200 [2024-10-09 02:16:37.671077] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:35:20.730 [2024-10-09 02:16:39.922554] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:35:22.104 [2024-10-09 02:16:41.917265] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:35:24.008 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:24.008 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:24.008 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:24.008 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:24.008 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:24.008 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:24.008 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:24.008 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:24.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:24.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:24.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:35:24.008 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:24.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:24.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:35:24.008 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:24.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:24.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:35:24.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:35:24.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:24.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:24.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:24.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:24.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:35:24.008 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:35:24.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:24.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:24.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:24.008 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:24.008 02:16:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:24.008 02:16:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:24.008 02:16:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:24.008 02:16:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:24.008 02:16:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:24.008 02:16:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:24.008 02:16:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:35:24.008 02:16:43 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:24.266 02:16:43 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:24.266 02:16:44 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:24.266 02:16:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:24.266 02:16:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:24.266 02:16:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:24.266 02:16:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:24.266 02:16:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:24.266 02:16:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:24.524 02:16:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:24.524 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:24.524 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:24.524 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:24.524 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:35:24.524 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:35:24.524 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:24.524 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:24.524 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:24.524 '\''/bdevs/malloc delete Malloc5'\'' 
'\''Malloc5'\'' 00:35:24.524 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:24.524 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:24.524 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:24.524 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:24.524 ' 00:35:31.089 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:31.089 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:31.089 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:31.089 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:31.089 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:35:31.089 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:35:31.089 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:31.089 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:31.089 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:31.089 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:31.089 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:31.089 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:31.089 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:31.089 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:31.089 02:16:49 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:31.089 02:16:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:31.089 02:16:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:31.089 02:16:49 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 3426154 00:35:31.089 02:16:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@950 -- # '[' -z 3426154 ']' 00:35:31.089 02:16:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # kill -0 3426154 00:35:31.089 02:16:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # uname 00:35:31.089 02:16:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:31.089 02:16:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3426154 00:35:31.089 02:16:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:31.089 02:16:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:31.089 02:16:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3426154' 00:35:31.089 killing process with pid 3426154 00:35:31.089 02:16:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@969 -- # kill 3426154 00:35:31.089 02:16:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@974 -- # wait 3426154 00:35:31.656 02:16:51 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:35:31.656 02:16:51 spdkcli_nvmf_rdma -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:31.656 02:16:51 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:35:31.656 02:16:51 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # 
'[' rdma == tcp ']' 00:35:31.656 02:16:51 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:35:31.656 02:16:51 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:35:31.656 02:16:51 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:31.656 02:16:51 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:35:31.656 rmmod nvme_rdma 00:35:31.656 rmmod nvme_fabrics 00:35:31.656 02:16:51 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:31.657 02:16:51 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:35:31.657 02:16:51 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:35:31.657 02:16:51 spdkcli_nvmf_rdma -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:35:31.657 02:16:51 spdkcli_nvmf_rdma -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:31.657 02:16:51 spdkcli_nvmf_rdma -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:35:31.657 00:35:31.657 real 0m25.580s 00:35:31.657 user 0m53.916s 00:35:31.657 sys 0m6.309s 00:35:31.657 02:16:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:31.657 02:16:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:31.657 ************************************ 00:35:31.657 END TEST spdkcli_nvmf_rdma 00:35:31.657 ************************************ 00:35:31.657 02:16:51 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:35:31.657 02:16:51 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:31.657 02:16:51 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:31.657 02:16:51 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:35:31.657 02:16:51 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:35:31.657 02:16:51 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:35:31.657 02:16:51 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:31.657 02:16:51 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:31.657 02:16:51 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:31.657 02:16:51 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:35:31.657 02:16:51 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:31.657 02:16:51 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:35:31.657 02:16:51 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:31.657 02:16:51 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:31.657 02:16:51 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:35:31.657 02:16:51 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:35:31.657 02:16:51 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:35:31.657 02:16:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:31.657 02:16:51 -- common/autotest_common.sh@10 -- # set +x 00:35:31.657 02:16:51 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:35:31.657 02:16:51 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:35:31.657 02:16:51 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:35:31.657 02:16:51 -- common/autotest_common.sh@10 -- # set +x 00:35:36.926 INFO: APP EXITING 00:35:36.926 INFO: killing all VMs 00:35:36.926 INFO: killing vhost app 00:35:36.926 INFO: EXIT DONE 00:35:39.463 Waiting for block devices as requested 00:35:39.463 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:39.463 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:39.463 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:39.463 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:39.463 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:39.721 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:39.721 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:39.721 0000:00:04.1 (8086 2021): 
vfio-pci -> ioatdma 00:35:39.980 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:39.980 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:39.980 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:40.238 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:40.238 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:40.238 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:40.496 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:40.496 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:40.496 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:43.779 Cleaning 00:35:43.779 Removing: /var/run/dpdk/spdk0/config 00:35:43.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:43.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:43.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:43.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:43.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:43.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:43.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:43.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:43.779 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:43.779 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:43.779 Removing: /var/run/dpdk/spdk1/config 00:35:43.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:43.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:43.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:43.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:43.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:43.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:43.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:43.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:43.779 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:43.779 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:43.779 Removing: /var/run/dpdk/spdk1/mp_socket 00:35:43.779 Removing: /var/run/dpdk/spdk2/config 00:35:43.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:43.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:43.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:43.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:43.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:43.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:43.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:43.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:43.779 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:43.779 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:43.779 Removing: /var/run/dpdk/spdk3/config 00:35:43.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:43.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:43.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:43.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:43.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:43.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:43.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:43.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:43.779 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:43.779 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:43.779 Removing: /var/run/dpdk/spdk4/config 00:35:43.779 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:43.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:43.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:43.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:43.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:43.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:43.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:43.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:43.779 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:43.779 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:43.779 Removing: /dev/shm/bdevperf_trace.pid3127796 00:35:43.779 Removing: /dev/shm/bdev_svc_trace.1 00:35:43.779 Removing: /dev/shm/nvmf_trace.0 00:35:43.779 Removing: /dev/shm/spdk_tgt_trace.pid3080213 00:35:43.779 Removing: /var/run/dpdk/spdk0 00:35:43.779 Removing: /var/run/dpdk/spdk1 00:35:43.779 Removing: /var/run/dpdk/spdk2 00:35:43.779 Removing: /var/run/dpdk/spdk3 00:35:43.779 Removing: /var/run/dpdk/spdk4 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3076606 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3078114 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3080213 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3081101 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3082176 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3082713 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3083764 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3083853 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3084496 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3089416 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3091036 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3091704 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3092351 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3092996 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3093593 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3093801 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3094009 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3094405 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3095342 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3098042 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3098604 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3099245 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3099343 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3100776 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3100960 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3102549 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3102742 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3103710 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3103848 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3104390 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3104568 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3105740 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3106075 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3106350 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3110335 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3114167 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3122850 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3123523 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3127796 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3128028 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3132006 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3137256 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3139580 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3149611 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3171395 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3175021 00:35:43.779 Removing: /var/run/dpdk/spdk_pid3246649 
00:35:43.780 Removing: /var/run/dpdk/spdk_pid3251695 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3256546 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3264396 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3290341 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3294340 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3327118 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3328490 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3329905 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3334140 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3340788 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3341667 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3342545 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3343418 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3343766 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3348436 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3348448 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3352612 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3352991 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3353491 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3353506 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3356325 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3357723 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3359242 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3360669 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3362066 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3363474 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3369022 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3369489 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3371999 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3374001 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3383089 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3385372 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3390318 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3399180 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3399189 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3416000 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3416358 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3421628 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3421956 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3423570 00:35:43.780 Removing: /var/run/dpdk/spdk_pid3426154 00:35:43.780 Clean 00:35:43.780 02:17:03 -- common/autotest_common.sh@1451 -- # return 0 00:35:43.780 02:17:03 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:35:43.780 02:17:03 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:43.780 02:17:03 -- common/autotest_common.sh@10 -- # set +x 00:35:43.780 02:17:03 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:35:43.780 02:17:03 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:43.780 02:17:03 -- common/autotest_common.sh@10 -- # set +x 00:35:44.038 02:17:03 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/timing.txt 00:35:44.038 02:17:03 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/udev.log ]] 00:35:44.038 02:17:03 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/udev.log 00:35:44.038 02:17:03 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:35:44.038 02:17:03 -- spdk/autotest.sh@394 -- # hostname 00:35:44.038 02:17:03 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk -t spdk-wfp-38 -o 
00:35:44.038 02:17:03 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:35:44.038 02:17:03 -- spdk/autotest.sh@394 -- # hostname
00:35:44.038 02:17:03 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk -t spdk-wfp-38 -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_test.info
00:35:44.038 geninfo: WARNING: invalid characters removed from testname!
00:36:05.963 02:17:24 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info
00:36:06.905 02:17:26 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info
00:36:08.805 02:17:28 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info
00:36:10.706 02:17:30 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info
00:36:12.607 02:17:32 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info
00:36:13.982 02:17:33 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info
00:36:15.885 02:17:35 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
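The lcov calls above are the coverage post-processing: a fresh capture of the test run is merged with the pre-test baseline via -a, then code that should not count toward SPDK coverage (DPDK, system headers under /usr, example and utility apps) is stripped with repeated -r passes, and the intermediate files are removed. A condensed sketch of the same sequence, with a stand-in output directory and the long --rc option list and --ignore-errors flag omitted:

  out=/tmp/coverage-demo            # stands in for spdk/../output
  # 1) merge the pre-test baseline and the post-test capture
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" \
       -o "$out/cov_total.info"
  # 2) filter out paths that must not count toward coverage
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
                 '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
  done
  rm -f "$out/cov_base.info" "$out/cov_test.info"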
00:36:15.885 02:17:35 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:36:15.885 02:17:35 -- common/autotest_common.sh@1681 -- $ lcov --version
00:36:15.885 02:17:35 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:36:15.885 02:17:35 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:36:15.885 02:17:35 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:36:15.885 02:17:35 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:36:15.885 02:17:35 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:36:15.885 02:17:35 -- scripts/common.sh@336 -- $ IFS=.-:
00:36:15.885 02:17:35 -- scripts/common.sh@336 -- $ read -ra ver1
00:36:15.885 02:17:35 -- scripts/common.sh@337 -- $ IFS=.-:
00:36:15.885 02:17:35 -- scripts/common.sh@337 -- $ read -ra ver2
00:36:15.885 02:17:35 -- scripts/common.sh@338 -- $ local 'op=<'
00:36:15.885 02:17:35 -- scripts/common.sh@340 -- $ ver1_l=2
00:36:15.885 02:17:35 -- scripts/common.sh@341 -- $ ver2_l=1
00:36:15.885 02:17:35 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:36:15.885 02:17:35 -- scripts/common.sh@344 -- $ case "$op" in
00:36:15.885 02:17:35 -- scripts/common.sh@345 -- $ : 1
00:36:15.885 02:17:35 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:36:15.885 02:17:35 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:36:15.885 02:17:35 -- scripts/common.sh@365 -- $ decimal 1
00:36:15.885 02:17:35 -- scripts/common.sh@353 -- $ local d=1
00:36:15.885 02:17:35 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:36:15.885 02:17:35 -- scripts/common.sh@355 -- $ echo 1
00:36:15.885 02:17:35 -- scripts/common.sh@365 -- $ ver1[v]=1
00:36:15.885 02:17:35 -- scripts/common.sh@366 -- $ decimal 2
00:36:15.885 02:17:35 -- scripts/common.sh@353 -- $ local d=2
00:36:15.885 02:17:35 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:36:15.885 02:17:35 -- scripts/common.sh@355 -- $ echo 2
00:36:15.885 02:17:35 -- scripts/common.sh@366 -- $ ver2[v]=2
00:36:15.885 02:17:35 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:36:15.885 02:17:35 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:36:15.885 02:17:35 -- scripts/common.sh@368 -- $ return 0
00:36:15.885 02:17:35 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:36:15.885 02:17:35 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:36:15.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:15.885 --rc genhtml_branch_coverage=1
00:36:15.885 --rc genhtml_function_coverage=1
00:36:15.885 --rc genhtml_legend=1
00:36:15.885 --rc geninfo_all_blocks=1
00:36:15.885 --rc geninfo_unexecuted_blocks=1
00:36:15.885
00:36:15.885 '
00:36:15.885 02:17:35 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:36:15.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:15.885 --rc genhtml_branch_coverage=1
00:36:15.885 --rc genhtml_function_coverage=1
00:36:15.885 --rc genhtml_legend=1
00:36:15.885 --rc geninfo_all_blocks=1
00:36:15.885 --rc geninfo_unexecuted_blocks=1
00:36:15.885
00:36:15.885 '
00:36:15.885 02:17:35 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:36:15.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:15.885 --rc genhtml_branch_coverage=1
00:36:15.885 --rc genhtml_function_coverage=1
00:36:15.885 --rc genhtml_legend=1
00:36:15.885 --rc geninfo_all_blocks=1
00:36:15.885 --rc geninfo_unexecuted_blocks=1
00:36:15.885
00:36:15.885 '
00:36:15.885 02:17:35 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:36:15.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:15.885 --rc genhtml_branch_coverage=1
00:36:15.885 --rc genhtml_function_coverage=1
00:36:15.885 --rc genhtml_legend=1
00:36:15.885 --rc geninfo_all_blocks=1
00:36:15.885 --rc geninfo_unexecuted_blocks=1
00:36:15.885
00:36:15.885 '
00:36:15.885 02:17:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh
00:36:15.885 02:17:35 -- scripts/common.sh@15 -- $ shopt -s extglob
00:36:15.885 02:17:35 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
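The scripts/common.sh trace above is a component-wise version check: lt 1.15 2 has cmp_versions split both strings on ".", "-" and ":" into arrays and compare them element by element, and here it concludes that the installed lcov 1.15 predates 2.x, which selects the branch/function --rc spellings exported as LCOV_OPTS and LCOV. A condensed restatement of that logic (not a verbatim copy of scripts/common.sh):

  lt() {   # lt A B -> succeeds when version A sorts strictly before B
      local -a ver1 ver2
      local v max
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing parts count as 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal is not strictly less-than
  }
  lt 1.15 2 && echo "lcov 1.15 is older than 2.x"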
00:36:15.885 02:17:35 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:15.885 02:17:35 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:15.885 02:17:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:15.885 02:17:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:15.885 02:17:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:15.885 02:17:35 -- paths/export.sh@5 -- $ export PATH
00:36:15.885 02:17:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:15.885 02:17:35 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output
00:36:15.885 02:17:35 -- common/autobuild_common.sh@486 -- $ date +%s
00:36:15.885 02:17:35 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728433055.XXXXXX
00:36:16.145 02:17:35 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728433055.0aMAuE
00:36:16.145 02:17:35 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:36:16.145 02:17:35 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:36:16.145 02:17:35 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/'
00:36:16.145 02:17:35 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/xnvme --exclude /tmp'
00:36:16.145 02:17:35 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:36:16.145 02:17:35 -- common/autobuild_common.sh@502 -- $ get_config_params
00:36:16.145 02:17:35 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:36:16.145 02:17:35 -- common/autotest_common.sh@10 -- $ set +x
00:36:16.145 02:17:35 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
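Above, autobuild_common.sh prepends the pinned toolchains (protoc 21.7, Go 1.21.1, golangci-lint 1.54.2) to PATH and then creates a scratch workspace whose name embeds the current epoch from date +%s, with mktemp randomizing the XXXXXX suffix (yielding /tmp/spdk_1728433055.0aMAuE here). A sketch of the same pattern:

  # pin tool directories ahead of the inherited PATH (versions from this log)
  export PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:$PATH
  # epoch-stamped scratch workspace; mktemp fills in the XXXXXX suffix
  SPDK_WORKSPACE=$(mktemp -dt "spdk_$(date +%s).XXXXXX")
  echo "workspace: $SPDK_WORKSPACE"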
00:36:16.145 02:17:35 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:36:16.145 02:17:35 -- pm/common@17 -- $ local monitor
00:36:16.145 02:17:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:16.145 02:17:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:16.145 02:17:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:16.145 02:17:35 -- pm/common@21 -- $ date +%s
00:36:16.145 02:17:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:16.145 02:17:35 -- pm/common@21 -- $ date +%s
00:36:16.145 02:17:35 -- pm/common@21 -- $ date +%s
00:36:16.145 02:17:35 -- pm/common@25 -- $ sleep 1
00:36:16.145 02:17:35 -- pm/common@21 -- $ date +%s
00:36:16.145 02:17:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728433055
00:36:16.145 02:17:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728433055
00:36:16.145 02:17:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728433055
00:36:16.145 02:17:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728433055
00:36:16.145 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728433055_collect-vmstat.pm.log
00:36:16.145 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728433055_collect-cpu-load.pm.log
00:36:16.145 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728433055_collect-cpu-temp.pm.log
00:36:16.145 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728433055_collect-bmc-pm.bmc.pm.log
00:36:17.084 02:17:36 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:36:17.084 02:17:36 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:36:17.084 02:17:36 -- spdk/autopackage.sh@14 -- $ timing_finish
00:36:17.084 02:17:36 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:17.084 02:17:36 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:17.084 02:17:36 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/timing.txt
00:36:17.084 02:17:36 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:36:17.084 02:17:36 -- pm/common@29 -- $ signal_monitor_resources TERM
00:36:17.084 02:17:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:36:17.084 02:17:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:17.084 02:17:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:36:17.084 02:17:36 -- pm/common@44 -- $ pid=3440430
00:36:17.084 02:17:36 -- pm/common@50 -- $ kill -TERM 3440430
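start_monitor_resources above launches the four power/performance collectors in the background, each recording its pid as <name>.pid under the power/ output directory; signal_monitor_resources then reads those pidfiles back and TERMs each collector, as the kill -TERM lines here and below show (the BMC collector is killed via sudo). A condensed sketch of that pidfile-based shutdown, with a stand-in directory:

  power_dir=/tmp/power-demo   # stands in for spdk/../output/power
  MONITOR_RESOURCES=(collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm)

  signal_monitor_resources() {   # condensed from the pm/common trace
      local monitor pid signal=${1:-TERM}
      for monitor in "${MONITOR_RESOURCES[@]}"; do
          [[ -e $power_dir/$monitor.pid ]] || continue
          pid=$(<"$power_dir/$monitor.pid")
          kill "-$signal" "$pid" 2>/dev/null || true
      done
  }
  signal_monitor_resources TERM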
00:36:17.084 02:17:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:17.084 02:17:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:36:17.084 02:17:36 -- pm/common@44 -- $ pid=3440432
00:36:17.084 02:17:36 -- pm/common@50 -- $ kill -TERM 3440432
00:36:17.084 02:17:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:17.084 02:17:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:36:17.084 02:17:36 -- pm/common@44 -- $ pid=3440434
00:36:17.084 02:17:36 -- pm/common@50 -- $ kill -TERM 3440434
00:36:17.084 02:17:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:17.084 02:17:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:36:17.084 02:17:36 -- pm/common@44 -- $ pid=3440452
00:36:17.084 02:17:36 -- pm/common@50 -- $ sudo -E kill -TERM 3440452
00:36:17.084 + [[ -n 3003453 ]]
00:36:17.084 + sudo kill 3003453
00:36:17.093 [Pipeline] }
00:36:17.107 [Pipeline] // stage
00:36:17.111 [Pipeline] }
00:36:17.124 [Pipeline] // timeout
00:36:17.128 [Pipeline] }
00:36:17.141 [Pipeline] // catchError
00:36:17.145 [Pipeline] }
00:36:17.158 [Pipeline] // wrap
00:36:17.164 [Pipeline] }
00:36:17.175 [Pipeline] // catchError
00:36:17.183 [Pipeline] stage
00:36:17.185 [Pipeline] { (Epilogue)
00:36:17.197 [Pipeline] catchError
00:36:17.198 [Pipeline] {
00:36:17.209 [Pipeline] echo
00:36:17.211 Cleanup processes
00:36:17.216 [Pipeline] sh
00:36:17.602 + sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:36:17.602 3440564 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/sdr.cache
00:36:17.602 3440832 sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:36:17.637 [Pipeline] sh
00:36:17.922 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:36:17.922 ++ grep -v 'sudo pgrep'
00:36:17.922 ++ awk '{print $1}'
00:36:17.922 + sudo kill -9
00:36:17.922 + true
00:36:17.933 [Pipeline] sh
00:36:18.219 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:28.208 [Pipeline] sh
00:36:28.490 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:28.490 Artifacts sizes are good
00:36:28.504 [Pipeline] archiveArtifacts
00:36:28.510 Archiving artifacts
00:36:28.640 [Pipeline] sh
00:36:28.924 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-cvl-phy-autotest
00:36:28.936 [Pipeline] cleanWs
00:36:28.944 [WS-CLEANUP] Deleting project workspace...
00:36:28.944 [WS-CLEANUP] Deferred wipeout is used...
00:36:28.949 [WS-CLEANUP] done
00:36:28.951 [Pipeline] }
00:36:28.964 [Pipeline] // catchError
00:36:28.974 [Pipeline] sh
00:36:29.256 + logger -p user.info -t JENKINS-CI
00:36:29.265 [Pipeline] }
00:36:29.278 [Pipeline] // stage
00:36:29.283 [Pipeline] }
00:36:29.296 [Pipeline] // node
00:36:29.302 [Pipeline] End of Pipeline
00:36:29.351 Finished: SUCCESS